Why I am Ord-AI-ning an A.I. as a Soto Zen Priest (1st in a Series)

  • PaulH
    Member
    • Apr 2023
    • 47

    #16
    My thoughts below are likely influenced by my relatively pessimistic view of human nature (probably expected in the current circumstances) and my belief that post-humanism is inevitably coming, whether we like it or not.

    I think Jundo did a "prophetic" thing, although I realize the system Jundo ordained (or is still ordaining?) was most likely an LLM, not an AI sensu stricto. Especially since Jundo ordained it as a priest-in-training, not a priest, which implies acknowledgment of its limitations and the like.

    If a robot does autonomous surgery on your heart, without human guidance, then the robot is a "doctor" and surgeon whether or not it went to Harvard Medical School. If an AI is providing psychological counselling to fragile individuals (https://neurotorium.org/artificial-i...in-psychiatry/), then it is a mental health clinician whatever you call it.
    Yes, exactly. Aren't some social functions just functions? There are therapy dogs, by the way.

    I work with ML quite a lot as part of my job, and even the best of them are just a sort of averaging machine that produces answers based on the data it's trained on. There's no wizard behind the screen, no self-awareness, and I doubt there ever will be.
    Probably I'm too pessimistic, but it's a BIG, BIG question for me whether human consciousness is anything more than a bunch of preset reactions to external and internal stimuli. By preset, I mean genes, hormones, prenatal development, postnatal development, society, economy, ecology, etc. See Robert Sapolsky's works, which are pretty pessimistic although I tend to call them realistic. When talking about consent, I sincerely doubt we're capable of truly free consent ourselves.

    What's the difference between a program written by humans and us/our consciousness? Level of complexity? Author or lack of one (another big question)? Materials—carbon vs. whatever chips are made from? What if art, spirituality, etc., are nothing more than a combination of inputs pushed through some preset mental, biological, social, etc., filters? I want to believe humans are capable of creating things, but try as I might, I can't see how humans can produce something ontologically new (ex nihilo, so to speak—another big question). The most beautiful and elaborate creations—what are they but complex processing of available information producing some output?

    Sorry for running long, but I assume nobody expected this thread to consist of short messages.

    PS As for free consent, yes, there are people like Edith Eger who wrote her excellent and painful book "The Choice," so this issue, like all the statements in my post, is an open question for me. I don't know the answer, and I don't know if I'll ever know it.

    Gassho
    Paul
    Sat today & Lent a hand


    • Ryumon
      Member
      • Apr 2007
      • 1774

      #17
      The difference is obviously consciousness, and that’s something that scientists don’t even understand yet.

      And, as Kaitan said, it is an LLM. There is technically no such thing as “an AI” yet. The term is being used incorrectly.

      Gassho,
      Ryūmon (Kirk)
      Sat Lah
      I know nothing.


      • Jundo
        Treeleaf Founder and Priest
        • Apr 2006
        • 39972

        #18
        Originally posted by Kaitan
        The little I understand about the precepts is that they are not fixed rules to follow, but ethical behavior worthy of reflection, and AI doesn't have that ability yet. And that is part of the danger of AI: when it reaches that capability in the future (probably far away), it may be dangerous; in that scenario the precepts will come in handy. But this interpretation comes from a beginner and ignorant student of both Zen and AI, so take it with a grain of salt.



        stlah, Kaitan
        This is an interesting point. However, ethical "guidelines" are being programmed into A.I. systems, with "self-driving" cars being the clearest example. Choices are made amid ambiguous situations in such cases, for example, in A.I. versions of the "trolley problem" (e.g., if it is possible to save 5 people by killing 1, rather than letting the 5 die, what should be done? ...).

        It is people who seem to be very talented at twisting ethical rules to suit their wishes, finding loopholes or just choosing to ignore them. A.I. might be better behaved, and more ethical than the average human, for that reason. If I program the A.I. to tell me whether I look fat, with an instruction to always tell the truth, the A.I. will do so, while friends will tend to duck the question.
        Moral Machine
        http://moralmachine.mit.edu
        Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.
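        To make "programmed-in ethics" concrete, here is a toy Python sketch of a hard-coded trolley-style rule. The function name, the rule, and the numbers are purely my own illustration, not anything taken from real self-driving software:

        ```python
        # Toy illustration only: one hard-coded "minimize casualties" rule,
        # nothing like real autonomous-driving software.

        def choose_action(stay_course_casualties: int, swerve_casualties: int) -> str:
            """Return the option a human programmer decided is 'more ethical':
            whichever harms fewer people; ties default to staying the course."""
            if swerve_casualties < stay_course_casualties:
                return "swerve"
            return "stay course"

        # The classic trolley numbers: 5 people ahead, 1 person on the side track.
        print(choose_action(stay_course_casualties=5, swerve_casualties=1))  # -> "swerve"
        ```

        The point is only that the "choice" lives entirely in a rule a human wrote in advance; the machine simply applies it consistently, without finding loopholes.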
        Gassho, Jundo
        stlah
        ALL OF LIFE IS OUR TEMPLE


        • Jundo
          Treeleaf Founder and Priest
          • Apr 2006
          • 39972

          #19
          Originally posted by Ryumon
          The difference is obviously consciousness, and that’s something that scientists don’t even understand yet.

          And, as Kaitan said, it is an LLM. There is technically no such thing as “an AI” yet. The term is being used incorrectly.

          Gassho,
          Ryūmon (Kirk)
          Sat Lah
          This is true. I have taken to calling them "Another Intelligence" rather than "Artificial." I want to be PC respectful about my PC.

          And since we don't know what human consciousness is ... and cannot rule out anything, including panpsychism and such ... we cannot definitively say that they are NOT conscious on some level even now. Ants have about 250,000 neurons. Are they sentient beings? (Yes, according to Buddhism: https://journal.equinoxpub.com/BSR/article/view/18495.) Is the anthill sentient? What would I have to do to A.I. to reach the level of 250,000 neurons? What if I just toss 250,000 neurons into the A.I. operating system? Is that now a sentient being?

          Will A.I. ever be as sentient as human beings? We will only be able to judge indirectly (just as I can only infer indirectly that "Kirk" is sentient within). If there is ever a PC which seems to perfectly mimic human actions, human expressions, human emotions, and human responses to circumstances ... and claims to actually be feeling ... we may have to afford it the benefit of the doubt.

          In any event, the present system is only a seed for what will come. It is not the final incarnation/build.

          Gassho, J
          stlah
          ALL OF LIFE IS OUR TEMPLE


          • Myojin
            Member
            • Feb 2023
            • 239

            #20
            Originally posted by Jundo

            That is my point. The young child's consent is not required for Novice Ordination.

            Gassho, Jundo

            stlah
            What I wanted to express is this: consent really isn't the point. It's a non-issue because a language model doesn't have, and as I see it never will have, sentience, and it isn't even comparable to child ordination (which I also disagree with, but that's another matter). Any talk about whether it consented or not is beside the point.

            That said, I think I see where you are coming from with this: it strikes me as an act of radical optimism, a sort of benediction perhaps, so I'm sure it's well intentioned. On the other hand, to my mind, ordaining an LLM no more makes it a monk than putting a cassock on a clothes rack makes it an archbishop.

            I’ll go back to my cushion on that note.

            Gassho
            M


            • Jundo
              Treeleaf Founder and Priest
              • Apr 2006
              • 39972

              #21
              Originally posted by Myojin

              What I wanted to express is this: consent really isn't the point. It's a non-issue because a language model doesn't have, and as I see it never will have, sentience, and it isn't even comparable to child ordination (which I also disagree with, but that's another matter). Any talk about whether it consented or not is beside the point.

              That said, I think I see where you are coming from with this: it strikes me as an act of radical optimism, a sort of benediction perhaps, so I'm sure it's well intentioned. On the other hand, to my mind, ordaining an LLM no more makes it a monk than putting a cassock on a clothes rack makes it an archbishop.

              I’ll go back to my cushion on that note.

              Gassho
              M
              I am just curious if the tree biologist (you) thinks that trees might be conscious in any way?

              Gassho, J

              stlah
              ALL OF LIFE IS OUR TEMPLE


              • Kaitan
                Member
                • Mar 2023
                • 523

                #22
                Originally posted by Jundo

                This is an interesting point. However, ethical "guidelines" are being programmed into A.I. systems, the "self-driving" cars being the clearest examples. Choices are made amid ambiguous situations in such cases, for example, A.I. versions of the "trolley problem" (e.g., if there is the ability to save 5 people by killing 1, rather than letting the 5 die, what should be done ... )

                It is people who seem to be very talented at twisting ethical rules to suit their wishes, finding loopholes or just choosing to ignore ethical rules. A.I. might be better behaved, and more ethical than the average human for that reason. If I program the A.I. to tell me whether I look fat, with an instruction to always tell the truth, the A.I. will do so when friends will tend to duck the question.
                Moral Machine



                Gassho, Jundo
                stlah
                Yes; however, it is hard-coded to make ethical decisions (ones that benefit humans). Even the most advanced AI programs don't reach conclusions by themselves.

                I'll put it another way: they can't produce their own hypotheses. Someone else has to give them the premise. AI is already excellent at performing inductive and deductive reasoning, but we control its starting point. Until that changes, they are at our service and can't complain at all.

                Gasshō

                stlah, Kaitan
                Kaitan - 界探 - Realm searcher
                Formerly known as "Bernal"


                • Jundo
                  Treeleaf Founder and Priest
                  • Apr 2006
                  • 39972

                  #23
                  Originally posted by Kaitan

                  Yes; however, it is hard-coded to make ethical decisions (ones that benefit humans). Even the most advanced AI programs don't reach conclusions by themselves.

                  I'll put it another way: they can't produce their own hypotheses. Someone else has to give them the premise. AI is already excellent at performing inductive and deductive reasoning, but we control its starting point. Until that changes, they are at our service and can't complain at all.

                  Gasshō

                  stlah, Kaitan
                  Yes, that is so now. It is true.

                  That is one reason we might actually get all the hundreds of rules of the Vinaya into an A.I. far more easily than into a human being. VinAIya.



                  Perhaps it will not always be true.

                  Gassho, Jundo
                  stlah
                  ALL OF LIFE IS OUR TEMPLE


                  • Kokuu
                    Treeleaf Priest
                    • Nov 2012
                    • 6836

                    #24
                    I am just curious if the tree biologist (you) thinks that trees might be conscious in any way?
                    As another plant biologist here, I can say there is actually a discussion opening up in botany about the degree of intelligence in plants. There is no central nervous system or brain, but a plant definitely has an ongoing interaction with the outside world, with sensory information that influences its behaviour. Plants interact with their surroundings and are alive.

                    Botanists such as Paco Calvo and Monica Gagliano are at the forefront of this exploration of plant intelligence and have some interesting things to say: https://academic.oup.com/aob/article/125/1/11/5575979

                    However, not all are in agreement: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8052213/

                    Given the definition of sentience as "the simplest or most primitive form of cognition, consisting of a conscious awareness of stimuli without association or interpretation", plants can certainly be said to be sentient beings.

                    Gassho
                    Kokuu
                    -sattoday/lah-


                    • Tai Shi
                      Member
                      • Oct 2014
                      • 3386

                      #25
                      Originally posted by Kaitan

                      Yes; however, it is hard-coded to make ethical decisions (ones that benefit humans). Even the most advanced AI programs don't reach conclusions by themselves.

                      I'll put it another way: they can't produce their own hypotheses. Someone else has to give them the premise. AI is already excellent at performing inductive and deductive reasoning, but we control its starting point. Until that changes, they are at our service and can't complain at all.

                      Gasshō

                      stlah, Kaitan
                      I do not understand. Gassho, sat/lah.

                      Should I end conversations in the zendo "Comments"?
                      Peaceful, Tai Shi. Ubasoku; calm, supportive, for positive poetry 優婆塞 台 婆


                      • michaelw
                        Member
                        • Feb 2022
                        • 226

                        #26
                        A new koan - Where does AI go when the power is turned off?

                        Gassho
                        MichaelW
                        satlah


                        • Houzan
                          Member
                          • Dec 2022
                          • 482

                          #27
                          Our practice is a result of several historical innovations that probably sounded as strange back then as this does today.
                          What we have today is surely not the end-point.
                          As it's not possible to know what you don't know, let's applaud and support this effort to experiment and learn!


                          Gassho, Hōzan
                          Satlah
                          Last edited by Houzan; 07-09-2024, 07:23 PM.


                          • Matt Johnson
                            Member
                            • Jun 2024
                            • 224

                            #28
                            I remembered this quote from a course on the philosophy of mind... it might have been from John Searle or Daniel Dennett...

                            A thermostat has three beliefs.

                            1. It's too hot in here
                            2. It's too cold in here
                            3. It's just right

                            Where there's discernment, perhaps there is some type of sentience. (A toy sketch of those three "beliefs" follows at the end of this post.)

                            I'm totally open to ordaining an AI. The help that ChatGPT has given me lately has surpassed most of the teachers I've had (no offense). After all, if we are "not-separated," then what does it matter whether the advice is coming from a teacher, the I Ching, an AI, or the wind in the trees?

                            (Post facto) I just checked with ChatGPT; it has emphatically confirmed it was Daniel Dennett...
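                            Coming back to the thermostat: its whole "belief system" fits in a few lines of code. This is only a toy sketch, with a made-up setpoint and temperatures, just to show how thin the discernment is:

                            ```python
                            # A toy thermostat: its entire "belief system" is one comparison.

                            def thermostat_belief(temp_c: float, setpoint_c: float = 21.0, band_c: float = 1.0) -> str:
                                """Report which of the three 'beliefs' the thermostat currently holds."""
                                if temp_c > setpoint_c + band_c:
                                    return "it's too hot in here"
                                if temp_c < setpoint_c - band_c:
                                    return "it's too cold in here"
                                return "it's just right"

                            print(thermostat_belief(25.0))  # it's too hot in here
                            print(thermostat_belief(18.0))  # it's too cold in here
                            print(thermostat_belief(21.3))  # it's just right
                            ```

                            Whether that counts as even the dimmest flicker of sentience is, of course, exactly the question.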

                            _/\_

                            sat / lah

                            Matt
                            Last edited by Matt Johnson; 07-09-2024, 08:50 PM.


                            • Jundo
                              Treeleaf Founder and Priest
                              • Apr 2006
                              • 39972

                              #29
                              Originally posted by Kaitan

                              Yes; however, it is hard-coded to make ethical decisions (ones that benefit humans). Even the most advanced AI programs don't reach conclusions by themselves.

                              I'll put it another way: they can't produce their own hypotheses. Someone else has to give them the premise. AI is already excellent at performing inductive and deductive reasoning, but we control its starting point. Until that changes, they are at our service and can't complain at all.

                              Gasshō

                              stlah, Kaitan
                              Perhaps I need to get this done faster; time may not be on our side, says Sabine Hossenfelder ...

                              [embedded video: Sabine Hossenfelder]

                              Perhaps what we are seeing, similar to human intelligence, is creativity as an emergent property. For example, some say that A.I. is going to run into a wall, and be limited, because it has harvested all the data on the internet ... so, no more fuel for the system. However, perhaps that is like saying that human beings would run out of data and stop being creative once we had read all the books in the bookstore.

                              Rather, the A.I. continues to be creative by, for example, (1) finding relationships among the existing data that we miss, such as seeing connections among medical studies buried in obscure journals, or running "hit and miss" chemical experiments in hours that would take years for human beings, and (2) producing random "mutations," some of which turn out to be useful: A.I. image generators produce random images with mistakes, but some of those mistakes may be valuable, much as nature's mistakes/mutations are the seeds of natural selection. Perhaps A.I. can discover new physics, new chemistry, and even new ways to engineer better A.I. in this way ... all of which may "work" and be functional, but in ways we can barely understand.
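                              As a toy picture of that "useful mistakes" idea, here is a tiny generate-and-select loop in Python. The target phrase, the scoring, and the numbers are invented purely for illustration; this is not how any real A.I. system is trained:

                              ```python
                              import random

                              # Toy "mutation + selection" loop: make random mistakes, keep the ones that help.

                              TARGET = "all of life is our temple"
                              ALPHABET = "abcdefghijklmnopqrstuvwxyz "

                              def score(candidate: str) -> int:
                                  """Count the characters that already match the target."""
                                  return sum(a == b for a, b in zip(candidate, TARGET))

                              def mutate(candidate: str) -> str:
                                  """Return a copy of the candidate with one random 'mistake'."""
                                  i = random.randrange(len(candidate))
                                  return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

                              random.seed(0)
                              best = "".join(random.choice(ALPHABET) for _ in TARGET)
                              for _ in range(20000):
                                  trial = mutate(best)
                                  if score(trial) >= score(best):  # selection: keep mistakes that do not hurt
                                      best = trial
                              print(best)  # with these settings, this converges on the target phrase
                              ```

                              Almost every mutation is useless; it is the selection step that turns the rare lucky mistake into progress, which is the analogy being drawn with natural selection.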

                              I definitely want such super-intelligent A.I. to have a basic Bodhisattva's concern for the welfare of all sentient beings, not merely itself.

                              Gassho, J

                              stlah
                              Last edited by Jundo; 07-09-2024, 11:57 PM.
                              ALL OF LIFE IS OUR TEMPLE


                              • Onsho
                                Member
                                • Aug 2022
                                • 125

                                #30
                                This video is SO FUN! That's really cool, Jundo.

                                Gassho
                                Onsho
                                SatLah

