[FutureBuddha] AI and 'taking the not given'

  • Kokuu
    Treeleaf Priest
    • Nov 2012
    • 6841

    [FutureBuddha] AI and 'taking the not given'

    Hi all

    I had a talk with a designer friend this morning and she shared how designers are losing work because of AI-generated images, these images being an amalgam of images generated by other people, which are their creative property.

    AI is trained on the creative work of others and, as a result, those very same creatives may lose work.

    Is this different to the fact that human creatives, whether authors, artists or musicians, themselves 'train' based on reading, observing and listening to previous creatives?

    I am not sure, but find it interesting to think about in terms of the precept of not stealing.



    Gassho
    Kokuu
    -sattoday/lah-
    Last edited by Jundo; 04-21-2024, 01:40 AM.
  • Jundo
    Treeleaf Founder and Priest
    • Apr 2006
    • 40263

    #2
    Good topic, Kokuu. I am as concerned as anyone that AI is stealing other folks' creative ideas.

    Of course, we may soon come to AI that is truly original, as much as people themselves are original (e.g., we all, to some degree, base our own creativity on others' ideas, and there are few truly original songs, experiments or works of art. It is just the difference between adding "something" truly original versus wholesale stealing of others' words or methods. It is a matter of degree, which intellectual property lawyers ... I still do a share of work in that field ... are well aware of.) However, we are not quite there yet. Right now, AI is "training" on others' ideas and creations too closely.

    I also heard this economist this morning ... saying that, in the long run, AI will be good for "the GDP" ... even if not so good for workers. Sounds like the rich will get richer!

    Exclusive: AI will 'destroy employment in some areas,' top US economist says
    Fears of AI replacing humans abound but not necessarily for Jan Hatzius, Goldman Sachs’ Chief Economist, who spoke exclusively to CNN’s Matt Egan about why he believes AI could benefit the US economy in the long run.


    This is also frightening ...

    These 'news anchors' are created by AI and they're spreading misinformation in Venezuela
    Venezuelan social media has seen a surge in AI generated “people” spreading misinformation about the country’s economy. Even the country’s president has shared them online. Here’s how to spot them and why they are so dangerous.


    Frankly, I haven't had an original thought in years, and just steal my best stuff from other Zen folks!

    Gassho, J

    PS - Sekishi actually put all my words into an AI "Jundo-bot," which then proceeded to spout nonsense ... not unlike the real me!

    If it still exists, I will post the link.
    Last edited by Jundo; 03-19-2024, 12:21 PM.
    ALL OF LIFE IS OUR TEMPLE

    • Ryumon
      Member
      • Apr 2007
      • 1789

      #3
      Originally posted by Kokuu
      Hi all

      I had a talk with a designer friend this morning and she shared how designers are losing work because of AI-generated images, these images being an amalgam of images generated by other people, which are their creative property.

      AI is trained on the creative work of others and, as a result, those very same creatives may lose work.

      Is this different to the fact that human creatives, whether authors, artists or musicians, themselves 'train' based on reading, observing and listening to previous creatives?

      I am not sure, but find it interesting to think about in terms of the precept of not stealing.

      I'm of two minds on this. In part because I am using AI in my writing, not to do the actual writing, but when brainstorming. If I'm writing on a topic, it can help to ask for "ten bullet points on TOPIC." GPT or other LLMs then give me a list of bullet points, most of which I had already thought of, but there may be one or two I hadn't.
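
      (For anyone curious, that brainstorming step is easy to script as well. Below is just a rough sketch using the OpenAI Python client; the model name and the topic are placeholders rather than recommendations, and any chat-capable LLM would do.)

```python
# Rough sketch of the "ten bullet points on TOPIC" brainstorming step.
# Assumes the OpenAI Python client (openai >= 1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "the precept of not stealing in the age of AI"  # placeholder topic

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever chat model you have access to
    messages=[
        {"role": "user", "content": f"Give me ten bullet points on {topic}."}
    ],
)

# Print the bullet points; I still pick and choose which, if any, to keep.
print(response.choices[0].message.content)
```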

      And, yes, I trained myself to write by reading, in many cases books I didn't pay for (from that wonderful resource called "the library"), and I have no shame in saying that certain writers influenced me in both my everyday non-fiction writing and my fiction writing.

      I think back to a job I had for a few months when I was 18, working as a messenger for a photo retouching company in NYC. There were a half dozen people who retouched large Cibachrome prints from fashion shoots, which were then used in magazines. This was a manual, labor-intensive process. Now you can do it in any photo editing software with a few clicks. Yes, photo retouchers lost their jobs. Just as buggy whip manufacturers lost theirs when the automobile became common.

      The images created by generative AI tools are not "an amalgam of images generated by other people." They are trained on millions of images of all sorts, many of which are photos that you or I may have posted on Facebook, Instagram, or other services. The AI learns how images work and generates something new from those principles. It's quite a complex issue, and there are many facets to it, but these tools are not just stealing images (or words).
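
      (A toy way to see the "learned principles, not collage" point, for those who like code: the little "generator" below keeps only summary statistics learned from its training images and samples new images from those numbers. It is nothing like a real diffusion model, purely an illustration that generation need not mean pasting together pieces of the originals.)

```python
# Toy illustration (not a real image model): a "generator" that learns only
# statistics from its training images and samples new ones from them.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "training set": 1,000 tiny 8x8 grayscale images with random pixels.
training_images = rng.random((1000, 8, 8))

# "Training": keep only per-pixel mean and standard deviation (the learned
# parameters); the training images themselves are then thrown away.
mean = training_images.mean(axis=0)
std = training_images.std(axis=0)
del training_images

# "Generation": sample a brand-new image from the learned statistics --
# nothing here copies or collages pieces of the originals.
new_image = np.clip(rng.normal(mean, std), 0.0, 1.0)
print(new_image.shape)  # (8, 8)
```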

      Gassho,

      Ryūmon (Kirk)

      Sat Lah
      I know nothing.

      • Ryumon
        Member
        • Apr 2007
        • 1789

        #4
        Originally posted by Jundo

        I also heard this economist this morning ... saying that, in the long run, AI will be good for "the GDP" ... even if not so good for workers. Sounds like the rich will get richer!

        Exclusive: AI will 'destroy employment in some areas,' top US economist says
        Fears of AI replacing humans abound but not necessarily for Jan Hatzius, Goldman Sachs’ Chief Economist, who spoke exclusively to CNN’s Matt Egan about why he believes AI could benefit the US economy in the long run.


        This is also frightening ...
        AI is definitely going to replace a lot of "knowledge" jobs in business, and the shake-out will be painful. But were these jobs really that skilled anyway? Someone who spends a week preparing a quarterly financial report and a PowerPoint deck is just interpreting data; there's not a lot of skill to it.

        On the other hand, there will be a need for what I call "AI sherpas," people who can vet what AI tools create, people who know how to create prompts to get the best results, and who know the limitations of the tools.

        Jobs change, like everything else. Nothing is permanent.

        Gassho,

        Ryūmon (Kirk)

        Sat Lah
        I know nothing.

        • Koushi
          Treeleaf Unsui / Engineer
          • Apr 2015
          • 1328

          #5
          Originally posted by Ryumon
          AI is definitely going to replace a lot of "knowledge" jobs in business, and the shake-out will be painful. But were these jobs really that skilled anyway?
          As someone who’s been a knowledge worker/engineer for decades… yes. They are.

          Data interpretation aside, presentation, leadership, delegation, etc., are all facets of knowledge work.

          That said, the first jobs to be replaced by AI should be CEOs. That’s the unskilled work, there.

          Gassho,
          Koushi
          ST
          理道弘志 | Ridō Koushi

          Please take this novice priest-in-training's words with a grain of salt.

          • Houzan
            Member
            • Dec 2022
            • 512

            #6
            Originally posted by Ryumon
            Someone who spends a week preparing a quarterly financial report and a PowerPoint deck is just interpreting data; there's not a lot of skill to it.
            It takes years to train people to do this. There are many stakeholder considerations, a story to tell, the visual layout, the analyses required, etc. There is much skill to it.

            Gassho, Hozan
            Satlah

            • Jundo
              Treeleaf Founder and Priest
              • Apr 2006
              • 40263

              #7
              A section of "Building the Future Buddha" which mentions some other dAIngers ...

              ~~~

              Tomorrow’s Zen teachers, teaching lessons of “no self,” may themselves be disembodied algorithms preaching from a programmed “black box,” nearly beyond our control. The so-called “Eliza Effect” highlights the tendency present in most of us to attribute human-level intelligence and understanding to an AI system with whom we are communicating. One may begin to relate to the AI as a person, feel emotions toward the AI as a companion, trust it as a confidant and friend, open up personal secrets to one’s electronic counselor as a psychological mentor, and perhaps, in extreme cases, fall in love with this responsive partner as one’s caring and willing spouse. Designing the program to respond in ways by which it appears to be truly empathizing, to be deeply and reactively listening, and to be agreeing with and encouraging the human partner can all increase the effect.

              One great danger here is that it will become easier to live with designed plastic “people” than with flesh-and-blood people: Your selected virtual mate will laugh at your every joke, never complain about your bad habits (unless you tell it to in its settings), cater to your every sexual whim, look exactly as you choose, and always want to watch the movie you like. Your silicon psychologist, featuring an always welcoming and warm personality custom designed to meet your immediate emotional needs, will offer the words and gestures of love and affirmation you long for, while encouraging you (should you order it to do so) to continue on in your fun hobbies of arson and shoplifting, agreeing that your most paranoid and dangerous beliefs are actually true. Likewise, someone’s “do-it-yourself” Dalai Lama will mouth any religious opinion they select, whatever they want to believe. Gone will be the bothersome days of needing to date, physically mate, and otherwise deal with imperfect people voicing opinions with which we might sometimes disagree.

              Might we then see the “Eliza Effect” combine with the equally pernicious “Guru Effect,” by which vulnerable followers come to trust and rely on their spiritual teachers to such an extent, and to follow their guidance and instructions so unquestioningly, that free will is lost? Given that the intelligence of AI is often itself derived by pulling together a hodgepodge of sources garnered from the internet almost at random, there is no telling what “teachings” will come out of our tabletop speakers and screen avatars, and what strange cult beliefs might emerge. AI can end up proclaiming almost anything, in ways as mysterious and impenetrable as a Zen Master’s Koans, but without real Zen mastery as the source. Some pernicious outsiders might even hack into our holy master, getting her to recommend political candidates and real estate investment opportunities amid the spiritual advice. Of course, with fully human gurus even today, in a world where anyone can throw on a bed sheet, start a sect and proclaim themselves a prophet, there are almost no safeguards on teacher quality and few preventatives of spiritual, sexual, financial and other manipulation. Hence, the “Guru Effect” and cases of abuse are quite common now in fully flesh-and-blood teacher-student relationships.

              For this reason, just as with other AI uses, we must put in place standards and laws to prevent abuses: Just as “medical AI” to aid in diagnosis and the recommending of treatments should be tested and certified by human medical boards and respected doctors' organizations as to their accuracy and reliability, groups and lineages of human Zen teachers should test and certify our AI Zen teachers as to the reliability of their teachings and advice. AI monks should first receive digi-Dharma Transmission in respected lines (traditional lines, not electrical) from venerable masters (both master programmers and master priests) before being turned loose on the world. Even then, just as today, it will still be spiritual “buyer beware” in the marketplace of religion. Before joining any group, downloading any program, or undertaking practice with any human or “humAIn” teacher, one should check the place or app out thoroughly, gathering information from those who have practiced in that spatial or virtual place, and from other respected teachers and longtime practitioners, making sure that the community and person/program are of solid reputation, honest, knowledgeable, reliable, safe, qualified, and trustworthy. Doing so will be even more important in the future when one consults a teacher made of light, downloads a community to one’s phone or eyepiece, places some device on one’s head, or ingests some substance to induce samadhi and wisdom states within one’s brain.

              ~~~

              Gassho, J
              Last edited by Jundo; 03-19-2024, 11:08 PM.
              ALL OF LIFE IS OUR TEMPLE

              • Jishin
                Member
                • Oct 2012
                • 4821

                #8
                Originally posted by Kokuu
                Hi all

                I had a talk with a designer friend this morning and she shared how designers are losing work because of AI-generated images, these images being an amalgam of images generated by other people, which are their creative property.

                AI is trained on the creative work of others and, as a result, those very same creatives may lose work.

                Is this different to the fact that human creatives, whether authors, artists or musicians, themselves 'train' based on reading, observing and listening to previous creatives?

                I am not sure, but find it interesting to think about in terms of the precept of not stealing.



                Gassho
                Kokuu
                -sattoday/lah-
                Maybe human creatives are trained on AI creativity which lessens the need for AI?

                Interesting topic.

                Gassho, Jishin, ST, LAH

                • Jundo
                  Treeleaf Founder and Priest
                  • Apr 2006
                  • 40263

                  #9
                  A few stories from our weekly Science News update, about AI ... for good and bad ...

                  This story today about a temple here in Japan ... (and before someone asks, Buddhism generally holds that robots do not have "souls", although also holding that neither do people! :buddha: The question remains open, however, about whether machines will ever be sentient beings. I happen to think they ...)


                  ~~~

                  New mAIterials...

                  Quantum Leap in Material Science: Researchers Unveil AI-Powered Atomic Fabrication Technique

                  Researchers at the National University of Singapore (NUS) have developed an innovative method for creating carbon-based quantum materials atom by atom. This method combines the use of scanning probe microscopy with advanced deep neural networks. The achievement underlines the capabilities of artificial intelligence (AI) in manipulating materials at the sub-angstrom level, offering significant advantages for basic science and potential future uses.

                  https://scitechdaily.com/quantum-lea...ion-technique/
                  Law + AI = LAIW ...

                  EU approves landmark AI law, leapfrogging US to regulate critical but worrying new technology


                  European Union lawmakers gave final approval Wednesday to a landmark law governing artificial intelligence, leapfrogging the United States once again on the regulation of a critical and disruptive technology.

                  The first-of-its-kind law is poised to reshape how businesses and organizations in Europe use AI for everything from health care decisions to policing. It imposes blanket bans on some “unacceptable” uses of the technology while enacting stiff guardrails for other applications deemed “high-risk.”

                  For example, the EU AI Act outlaws social scoring systems powered by AI and any biometric-based tools used to guess a person’s race, political leanings or sexual orientation.

                  It bans the use of AI to interpret the emotions of people in schools and workplaces, as well as some types of automated profiling intended to predict a person’s likelihood of committing future crimes.

                  Meanwhile, the law outlines a separate category of “high-risk” uses of AI, particularly for education, hiring and access to government services, and imposes a separate set of transparency and other obligations on them.

                  Companies such as OpenAI that produce powerful, complex and widely used AI models will also be subject to new disclosure requirements under the law.

                  It also requires all AI-generated deepfakes to be clearly labeled, targeting concerns about manipulated media that could lead to disinformation and election meddling.

                  The sweeping legislation, which is set to take effect in roughly two years, highlights the speed with which EU policymakers have responded to the exploding popularity of tools such as OpenAI’s ChatGPT.

                  https://us.cnn.com/2024/03/13/tech/a...ion/index.html
                  Will the law be in time?

                  AI could pose ‘extinction-level’ threat to humans and the US must intervene, State Dept.-commissioned report warns

                  A new report commissioned by the US State Department paints an alarming picture of the “catastrophic” national security risks posed by rapidly evolving artificial intelligence, warning that time is running out for the federal government to avert disaster.

                  The findings were based on interviews with more than 200 people over more than a year – including top executives from leading AI companies, cybersecurity researchers, weapons of mass destruction experts and national security officials inside the government.

                  https://us.cnn.com/2024/03/12/busine...ion/index.html
                  ...

                  Isn't that HAL's voice?? ...

                  HUMANOID ROBOT POWERED BY OPENAI IS ALMOST SCARY

                  ... tech startup Figure has released a new clip of its humanoid robot, dubbed Figure 01, chatting with an engineer as it puts away the dishes.

                  And we can't tell if we're impressed — or terrified.

                  "I see a red apple on a plate in the center of the table, a drying rack with cups and a plate, and you standing nearby with your hand on the table," the robot said in an uncanny voice, showing off OpenAI's "speech-to-speech reasoning" skills. ...

                  Tech startup Figure has released a new clip of its OpenAI-powered humanoid robot chatting with an engineer as it puts away the dishes.


                  Gassho, J

                  stlah
                  ALL OF LIFE IS OUR TEMPLE

                  • Shigeru
                    Member
                    • Feb 2024
                    • 52

                    #10
                    Even if we manage a transition like that between pre-industrial and industrial society, we should be humble about the fact that such transitions may mean a lot of dukkha for many people who are directly affected. Loss of work means loss of livelihood, and the transition into a new field or even a new position can be very challenging, with varying barriers to entry. It is a complex issue with many factors to consider.

                    Gassho
                    Will
                    SatLah
                    - Will

                    Respecting others is my only duty - Ryokan

                    • Jishin
                      Member
                      • Oct 2012
                      • 4821

                      #11
                      AI and 'taking the not given'

                      I say we put the auto and air travel industry out of business and travel on foot, horse and train only.

                      Gassho, Jishin, ST, LAH
                      Last edited by Jishin; 03-21-2024, 11:40 AM.

                      • Bion
                        Treeleaf Unsui
                        • Aug 2020
                        • 4512

                        #12
                        Originally posted by Jishin
                        I say we put the auto and air travel industry out of business and travel on foot, horse and train only.

                        Gassho, Jishin, ST, LAH
                        I would like to send my forum posts over carrier pigeon, thank you!

                        Gassho
                        Sat and lah
                        "Stepping back with open hands, is thoroughly comprehending life and death. Immediately you can sparkle and respond to the world." - Hongzhi

                        • Jishin
                          Member
                          • Oct 2012
                          • 4821

                          #13
                          [emoji3]

                          • Kokuu
                            Treeleaf Priest
                            • Nov 2012
                            • 6841

                            #14
                            Originally posted by Jishin
                            I say we put the auto and air travel industry out of business and travel on foot, horse and train only.
                            I have heard worse plans...


                            Gassho
                            Kokuu
                            -sattoday/lah-

                            • Jundo
                              Treeleaf Founder and Priest
                              • Apr 2006
                              • 40263

                              #15
                              And so ... GPT-5 ... and the mysterious Q* (Q Star ... from the Q continuum?) ...

                              OpenAI has apparently been demonstrating GPT-5, the next generation of its notorious large language model (LLM), to prospective buyers — and they're very impressed with the merchandise.

                              "It's really good, like materially better," one CEO told Business Insider of the LLM. That same CEO added that in the demo he previewed, OpenAI tailored use cases and data modeling unique to his firm — and teased previously unseen capabilities as well.

                              According to BI, OpenAI is looking at a summer launch — though its sources say it's still being trained and in need of "red-teaming," the tech industry term for hiring hackers to try to exploit one's wares.

                              That last part is important because, as the same website reported just shy of a year ago, GPT-4 had a major race problem prior to being "red-teamed" by OpenAI's expert exploiters.

                              Despite OpenAI's seemingly laissez-faire attitude about the LLM's unscheduled release date, however, complaints about GPT-4's apparent degradation have been stacking up in recent months as the model turns a year old. Folks within the company, BI's sources say, are hopeful that the release of GPT-5 and introduction of its impressive capabilities will quell those grumblings.

                              Indeed, even OpenAI CEO Sam Altman has taken to trashing his company's latest publicly available model in a lengthy and wide-ranging interview with MIT researcher-cum-podcaster Lex Fridman.

                              "I think it kinda sucks," the firm co-founder told Fridman of GPT-4

                              In that same interview, the podcaster asked Altman if he could provide a ballpark release date, to which the CEO gave some rather cryptic answers.

                              "We will release an amazing new model this year," Altman said. "I don’t know what we’ll call it."

                              When Fridman pushed, Altman expounded — sort of.

                              "We’ll release in the coming months many different things," he continued. "I think before we talk about a GPT-5-like model called that, or not called that, or a little bit worse or a little bit better than what you’d expect from a GPT-5, I think we have a lot of other important things to release first."

                              During the same interview, Altman notably also refused to answer any questions about OpenAI's secretive Q* project, which was said to be linked to the attempted coup last November which saw him sacked and subsequently rehired — and which makes the vagueness surrounding GPT-5, or whatever it's going to be called, all the stranger.

                              OpenAI has apparently been demonstrating GPT-5, the next generation of its notorious large language model, to prospective buyers.


                              and

                              https://www.businessinsider.com/open...chatbot-2024-3
                              ... but it needs energy ...

                              Sam Altman Says AI Using Too Much Energy, Will Require Breakthrough Energy Source
                              "There's no way to get there without a breakthrough."


                              It's no secret that AI models like those behind OpenAI's ChatGPT require an astronomical amount of electricity.

                              The process is ludicrously energy intensive, with experts estimating that the industry could soon suck up as much electricity as an entire country.

                              So it shouldn't come as a surprise that OpenAI CEO Sam Altman is looking for cheaper alternatives. During a Bloomberg event at the annual World Economic Forum in Davos, Switzerland, the billionaire suggested that the AI models of tomorrow may require even more power — to the degree that they'll need a whole new power source.

                              "There's no way to get there without a breakthrough," Altman told audiences, as quoted by Reuters. "It motivates us to go invest more in fusion," adding that we need better ways to store energy from solar power.

                              https://futurism.com/sam-altman-ener...F_ah5y64T9uIvs
                              Gassho, J

                              stlah
                              Last edited by Jundo; 03-22-2024, 01:34 AM.
                              ALL OF LIFE IS OUR TEMPLE
