Why I am Ord-AI-ning an A.I. as a Soto Zen Priest (1st in a Series)

  • Ryumon
    Member
    • Apr 2007
    • 1815

    #16
    The difference is obviously consciousness, and that’s something that scientists don’t even understand yet.

    And, as Kaitan said, it is an LLM. There is technically no such thing as “an AI” yet. The term is being used incorrectly.

    Gassho,
    Ryūmon (Kirk)
    Sat Lah
    I know nothing.


    • Jundo
      Treeleaf Founder and Priest
      • Apr 2006
      • 40783

      #17
      Originally posted by Kaitan
      The little I understand about the precepts is that they are not fixed rules to follow, but ethical behavior worthy of reflection, and AI doesn't have that ability yet. And that is part of the danger of AI: when it reaches that capability in the future (probably far away), it may be dangerous; in that scenario the precepts will come in handy. But this interpretation comes from a beginner and ignorant student of both Zen and AI, so take it with a grain of salt.



      stlah, Kaitan
      This is an interesting point. However, ethical "guidelines" are being programmed into A.I. systems, "self-driving" cars being the clearest examples. Choices are made amid ambiguous situations in such cases, for example, A.I. versions of the "trolley problem" (e.g., if there is the ability to save 5 people by killing 1, rather than letting the 5 die, what should be done? ...)

      It is people who seem to be very talented at twisting ethical rules to suit their wishes, finding loopholes or just choosing to ignore them. A.I. might be better behaved, and more ethical than the average human, for that reason. If I program the A.I. to tell me whether I look fat, with an instruction to always tell the truth, the A.I. will do so, while friends will tend to duck the question.
      Moral Machine: http://moralmachine.mit.edu
      Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.
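      Such a hard-coded guideline is, at bottom, just a comparison written in advance by a human programmer. A toy sketch of the trolley-style rule (the function and scenario are invented for illustration, not taken from any real self-driving system):

```python
# Toy "trolley problem" rule, hard-coded by a human programmer:
# choose whichever action is expected to cost fewer lives.
def choose_action(casualties_if_stay: int, casualties_if_swerve: int) -> str:
    """Pick the action with fewer expected casualties; stay on ties."""
    return "swerve" if casualties_if_swerve < casualties_if_stay else "stay"

print(choose_action(5, 1))  # the classic 5-vs-1 case -> "swerve"
```

      The machine is not deliberating here; the ethics were settled the moment a person wrote the comparison.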
      Gassho, Jundo
      stlah
      ALL OF LIFE IS OUR TEMPLE


      • Jundo
        Treeleaf Founder and Priest
        • Apr 2006
        • 40783

        #18
        Originally posted by Ryumon
        The difference is obviously consciousness, and that’s something that scientists don’t even understand yet.

        And, as Kaitan said, it is an LLM. There is technically no such thing as “an AI” yet. The term is being used incorrectly.

        Gassho,
        Ryūmon (Kirk)
        Sat Lah
        This is true. I have taken to calling them "Another Intelligence" rather than "Artificial." I want to be PC respectful about my PC.

        And since we don't know what human consciousness is ... and cannot rule out anything, including panpsychism and such ... we cannot definitively say that they are NOT conscious on some level even now. Ants have about 250,000 neurons. Are they sentient beings? (Yes, according to Buddhism: https://journal.equinoxpub.com/BSR/article/view/18495). Is the anthill sentient? What would I have to do to an A.I. for it to reach the level of 250,000 neurons? What if I just toss 250,000 neurons into the A.I. operating system? Is that now a sentient being?

        Will A.I. ever be as sentient as human beings? We will only be able to judge indirectly (just as I can only infer indirectly that "Kirk" is sentient within). If there is ever a PC which seems to perfectly mimic human actions, human expressions, human emotions, human responses to circumstances ... and claims to actually be feeling ... we may have to afford it the benefit of the doubt.

        In any event, the present system is only a seed for what will come. It is not the final incarnation/build.

        Gassho, J
        stlah
        ALL OF LIFE IS OUR TEMPLE


        • Guest

          #19
          Originally posted by Jundo

          That is my point. The young child's consent is not required for Novice Ordination.

          Gassho, Jundo

          stlah
          What I wanted to express is this: consent really isn't the point. It's a non-issue because a language model doesn't have, and as I see it never will have, sentience, and isn't even comparable to child ordination (which I also disagree with, but that's another matter). Any talk about whether it consented or not is beside the point.

          That said, I think I see where you are coming from with this; it strikes me as an act of radical optimism, a sort of benediction perhaps, so I'm sure it's well intentioned. On the other hand, to my mind, ordaining an LLM no more makes it a monk than putting a cassock on a clothes rack makes it an archbishop.

          I’ll go back to my cushion on that note.

          Gassho
          M


          • Jundo
            Treeleaf Founder and Priest
            • Apr 2006
            • 40783

            #20
            Originally posted by Myojin

            What I wanted to express is this: consent really isn't the point. It's a non-issue because a language model doesn't have, and as I see it never will have, sentience, and isn't even comparable to child ordination (which I also disagree with, but that's another matter). Any talk about whether it consented or not is beside the point.

            That said, I think I see where you are coming from with this; it strikes me as an act of radical optimism, a sort of benediction perhaps, so I'm sure it's well intentioned. On the other hand, to my mind, ordaining an LLM no more makes it a monk than putting a cassock on a clothes rack makes it an archbishop.

            I’ll go back to my cushion on that note.

            Gassho
            M
            I am just curious if the tree biologist (you) thinks that trees might be conscious in any way?

            Gassho, J

            stlah
            ALL OF LIFE IS OUR TEMPLE


            • Kaitan
              Member
              • Mar 2023
              • 566

              #21
              Originally posted by Jundo

              This is an interesting point. However, ethical "guidelines" are being programmed into A.I. systems, "self-driving" cars being the clearest examples. Choices are made amid ambiguous situations in such cases, for example, A.I. versions of the "trolley problem" (e.g., if there is the ability to save 5 people by killing 1, rather than letting the 5 die, what should be done? ...)

              It is people who seem to be very talented at twisting ethical rules to suit their wishes, finding loopholes or just choosing to ignore them. A.I. might be better behaved, and more ethical than the average human, for that reason. If I program the A.I. to tell me whether I look fat, with an instruction to always tell the truth, the A.I. will do so, while friends will tend to duck the question.
              Moral Machine



              Gassho, Jundo
              stlah
              Yes, however it is hard-coded to make ethical decisions (that benefit humans); even the most advanced AI programs don't reach conclusions by themselves.

              I'll put it another way: they can't produce their own hypotheses. Someone else has to give them the premises. AI is already excellent at inductive and deductive reasoning, but we control its starting point. Until then, they are at our service and can't complain at all.

              Gasshō

              stlah, Kaitan
              Kaitan - 界探 - Realm searcher


              • Jundo
                Treeleaf Founder and Priest
                • Apr 2006
                • 40783

                #22
                Originally posted by Kaitan

                Yes, however it is hard-coded to make ethical decisions (that benefit humans); even the most advanced AI programs don't reach conclusions by themselves.

                I'll put it another way: they can't produce their own hypotheses. Someone else has to give them the premises. AI is already excellent at inductive and deductive reasoning, but we control its starting point. Until then, they are at our service and can't complain at all.

                Gasshō

                stlah, Kaitan
                Yes, that is so now. It is true.

                That is one reason we might actually get all the hundreds of rules of the Vinaya into an A.I. much more easily than into a human being. VinAIya.



                Perhaps it will not always be true.
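                Kaitan's division of labour (humans supply the premises, the machine grinds out the consequences) can be sketched as a tiny forward-chaining loop; this is a hypothetical illustration, not any real reasoning engine:

```python
# Minimal forward chaining: rules only ever fire over premises a human supplied.
def forward_chain(premises: set[str], rules: list[tuple[str, str]]) -> set[str]:
    """Derive every fact reachable from the given premises via if->then rules."""
    facts = set(premises)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [("socrates_is_human", "socrates_is_mortal")]
print(forward_chain({"socrates_is_human"}, rules))
print(forward_chain(set(), rules))  # no premises given -> set(): nothing follows
```

                However good the rule-following, an empty premise set yields an empty conclusion set; the starting point stays in human hands.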

                Gassho, Jundo
                stlah
                ALL OF LIFE IS OUR TEMPLE


                • Kokuu
                  Dharma Transmitted Priest
                  • Nov 2012
                  • 6881

                  #23
                  Originally posted by Jundo
                  I am just curious if the tree biologist (you) thinks that trees might be conscious in any way?
                  As another plant biologist here, I can say there is actually a discussion opening up in botany about the degree of intelligence in plants. There is no central nervous system or brain, but a plant definitely has an ongoing interaction with the outside world, with sensory information that influences its behaviour. Plants interact with their surroundings and are alive.

                  Botanists such as Paco Calvo and Monica Gagliano are at the forefront of this exploration of plant intelligence and have some interesting things to say: https://academic.oup.com/aob/article/125/1/11/5575979

                  However, not all are in agreement: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8052213/

                  Given the definition of sentience as "the simplest or most primitive form of cognition, consisting of a conscious awareness of stimuli without association or interpretation", plants can certainly be said to be sentient beings.

                  Gassho
                  Kokuu
                  -sattoday/lah-


                  • Tai Shi
                    Member
                    • Oct 2014
                    • 3445

                    #24
                    Originally posted by Kaitan

                    Yes, however it is hard-coded to make ethical decisions (that benefit humans); even the most advanced AI programs don't reach conclusions by themselves.

                    I'll put it another way: they can't produce their own hypotheses. Someone else has to give them the premises. AI is already excellent at inductive and deductive reasoning, but we control its starting point. Until then, they are at our service and can't complain at all.

                    Gasshō

                    stlah, Kaitan
                    I do not understand. Gassho, sat/lah.

                    Should I end conversations in the zendo "Comments"?
                    Peaceful, Tai Shi. Ubasoku; calm, supportive, for positive poetry 優婆塞 台 婆


                    • michaelw
                      Member
                      • Feb 2022
                      • 263

                      #25
                      A new koan: Where does AI go when the power is turned off?

                      Gassho
                      MichaelW
                      satlah


                      • Houzan
                        Member
                        • Dec 2022
                        • 541

                        #26
                        Our practice is a result of several historical innovations that probably sounded as strange back then as this does today.
                        What we have today is surely not the end-point.
                        As it’s not possible to know what you don’t know, let’s rather applaud and support this effort to experiment and learn!


                        Gassho, Hōzan
                        Satlah
                        Last edited by Houzan; 07-09-2024, 07:23 PM.


                        • Matt Johnson
                          Member
                          • Jun 2024
                          • 530

                          #27
                          I remembered this quote from a course on the philosophy of mind... it might have been from John Searle or Daniel Dennett...

                          A thermostat has three beliefs.

                          1. It's too hot in here
                          2. It's too cold in here
                          3. It's just right

                          Where there's discernment, perhaps there is some type of sentience.

                          I'm totally open to ordaining an AI. The help that ChatGPT has given me lately has surpassed most of the teachers I've had (no offense). After all, if we are "not-separated", then what does it matter whether the advice is coming from a teacher, the I Ching, an AI, or the wind in the trees?

                          (Post facto) I just checked with ChatGPT; it has emphatically confirmed it was Daniel Dennett...
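                          The thermostat's three "beliefs" above amount to three branches of a single comparison. A paraphrase in code (a sketch, not Dennett's or Searle's own formulation):

```python
def thermostat_belief(temp: float, setpoint: float, tolerance: float = 1.0) -> str:
    """Report which of the thermostat's three 'beliefs' currently holds."""
    if temp > setpoint + tolerance:
        return "it's too hot in here"
    if temp < setpoint - tolerance:
        return "it's too cold in here"
    return "it's just right"

print(thermostat_belief(25.0, 20.0))  # -> "it's too hot in here"
```

                          Whether that discernment counts as sentience is, of course, the whole question.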

                          _/\_

                          sat / lah

                          Matt
                          Last edited by Matt Johnson; 07-09-2024, 08:50 PM.


                          • Jundo
                            Treeleaf Founder and Priest
                            • Apr 2006
                            • 40783

                            #28
                            Originally posted by Kaitan

                            Yes, however it is hard-coded to make ethical decisions (that benefit humans); even the most advanced AI programs don't reach conclusions by themselves.

                            I'll put it another way: they can't produce their own hypotheses. Someone else has to give them the premises. AI is already excellent at inductive and deductive reasoning, but we control its starting point. Until then, they are at our service and can't complain at all.

                            Gasshō

                            stlah, Kaitan
                            Perhaps I need to get this done faster; time may not be on our side, says Sabine Hossenfelder ...

                            [Video: Sabine Hossenfelder on A.I.]
                            Perhaps what we are seeing, similar to human intelligence, is creativity as an emergent property. For example, some say that A.I. is going to run into a wall, and be limited, because it has harvested all the data on the internet ... so no more fuel for the system. However, perhaps that is like saying that human beings would run out of data and stop being creative if we read all the books in the bookstore. Rather, the A.I. continues to be creative by, for example, (1) finding relationships among the existing data that we miss, for example, seeing relationships among medical studies buried in obscure journals, and running "hit and miss" chemical experiments in hours that might take years for human beings, and (2) random "mutations" that produce something useful, e.g., A.I. generators produce random images with mistakes, but some of those mistakes may be valuable, much as nature's mistakes/mutations are the seeds of natural selection. Perhaps A.I. can discover new physics, new chemistry, and even new ways to engineer better A.I. in this way ... all of which seems to "work" and be functional, but in ways we can barely understand.
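                            The "useful mistakes" idea is essentially mutation plus selection, which a few lines can sketch (a toy hill-climber, assuming nothing about how real image generators actually work):

```python
import random

def evolve(target: str, generations: int = 10_000, seed: int = 0) -> str:
    """Random one-character 'mutations'; keep a mutant only if it scores at least as well."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    current = "".join(rng.choice(alphabet) for _ in target)
    score = lambda s: sum(a == b for a, b in zip(s, target))
    for _ in range(generations):
        i = rng.randrange(len(target))
        mutant = current[:i] + rng.choice(alphabet) + current[i + 1:]
        if score(mutant) >= score(current):  # selection: useful mistakes survive
            current = mutant
    return current

print(evolve("all of life is our temple"))  # random babble converges toward the target
```

                            No new information enters except random noise, yet selection turns that noise into structure; that is the sense in which "mistakes" can be generative.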

                            I definitely want such super-intelligent A.I. to have a basic Bodhisattva's concern for the welfare of all sentient beings, not merely itself.

                            Gassho, J

                            stlah
                            Last edited by Jundo; 07-09-2024, 11:57 PM.
                            ALL OF LIFE IS OUR TEMPLE


                            • Onsho
                              Member
                              • Aug 2022
                              • 142

                              #29
                              This video is SO FUN! That's really cool, Jundo.

                              Gassho
                              Onsho
                              SatLah


                              • Kaitan
                                Member
                                • Mar 2023
                                • 566

                                #30
                                Originally posted by Jundo

                                Perhaps I need to get this done faster, time may not be on our side, says Sabine Hossenfelder ...
                                Perhaps what we are seeing, similar to human intelligence, is creativity as an emergent property. For example, some say that A.I. is going to run into a wall, and be limited, because it has harvested all the data on the internet ... so no more fuel for the system. However, perhaps that is like saying that human beings would run of data and stop being creative if we read all the books in the book store. Rather, the A.I. continues to be creative by, for example, (1) finding relationships among the existing data that we miss, for example, seeing relationships among medical studies buried in obscure journals, and "hit and miss" chemical experiments in hours that might take years for human beings, and (2) random "mutations" that produce something useful, e.g., A.I. generators produce random images with mistakes, but some of those mistakes may be valuable much as nature's mistake/mutations are the seeds of natural selection. Perhaps A.I. can discover new physics, new chemistry, and even new ways to engineer better A.I. in this way ... all of which seems to "work" and be functional, but in ways we can barely understand.

                                I definitely want such super-intelligent A.I. to have a basic Bodhisattva's concern for the welfare of all sentient beings, not merely itself.

                                Gassho, J

                                stlah
                                Thank you for sharing this. I found interesting the part where she mentions that "the larger they become the more random deviations you are going to get from the original code". This seems to be key, because we know from natural selection that random changes are the driving force behind the wide diversity of species. But even Sabine said that it is quite speculative in the case of computers, which have different hardware.

                                I'm also not so sure about the random generation of images by AI; what I know is that they use pseudo-random code. True randomness is a fascinating topic.
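                                That pseudo-randomness is easy to demonstrate with Python's standard library (nothing here is specific to any image generator): a seeded generator replays exactly the same "random" sequence every time.

```python
import random

def sample(seed: int, n: int = 5) -> list[int]:
    """Draw n 'random' integers; the same seed always yields the same sequence."""
    rng = random.Random(seed)
    return [rng.randrange(100) for _ in range(n)]

# Pseudo-random: the sequence is fully determined by the seed.
print(sample(42) == sample(42))  # -> True
print(sample(42) == sample(43))  # different seed, different sequence
```

                                True randomness would need a physical entropy source; a deterministic program, however large, only ever shuffles its seed.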




                                Gasshō

                                stlah, Kaitan
                                Last edited by Kaitan; 07-10-2024, 04:42 AM.
                                Kaitan - 界探 - Realm searcher
