[Ecodharma] The eco-cost of AI

  • Kokuu
    Treeleaf Priest
    • Nov 2012
    • 6841

    [Ecodharma] The eco-cost of AI

    Dear all

    I have seen in some quarters that AI is being sold to us with the promise that it will make us more energy efficient and devise better strategies for living sustainably.

    Whether this turns out to be true or not, at present generative AI is having the opposite effect in terms of carbon production.

    Dr Sasha Luccioni has done research which suggests that generative AI can use around 33 times as much energy as machines running conventional task-specific software (https://www.bbc.co.uk/news/articles/cj5ll89dy2mo), and data shows that new models of generative AI are huge consumers of energy, even compared to earlier models (https://www.technologyreview.com/202...ng-your-phone/).

    Although we can control our usage of generative AI to some degree, it is becoming increasingly incorporated into general software and search engines that most of us use.

    Whether generative AI is bringing any benefit to us, as opposed to the corporations that use it, is an open question. What does seem clear is that the surge in generative AI usage is driving a large increase in energy consumption, during a period of history when our carbon footprint needs to be coming down.

    What we can do about this I am not sure, but it is a concern.

    Gassho
    Kokuu
    -sattoday/lah-
  • Kaitan
    Member
    • Mar 2023
    • 542

    #2
    The little I know about machine learning is that the conventional approach is based on the von Neumann computing model. This architecture is highly inefficient for these tasks, so it requires a great deal of computing power. Training a complex transformer model can use an amount of energy equivalent to the lifetime carbon emissions of five cars!

    Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.
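
    To put rough numbers on that kind of cost, here is a back-of-the-envelope sketch of how such an estimate is usually assembled (average power draw × training time × grid carbon intensity). Every figure below is an illustrative assumption, not a number taken from the paper quoted above.

    ```python
    # Back-of-the-envelope estimate of training energy and carbon.
    # All inputs are illustrative assumptions, not measured values.
    gpus = 64                   # accelerators used for training (assumed)
    power_per_gpu_kw = 0.3      # average draw per GPU in kW (assumed)
    pue = 1.5                   # data-centre overhead factor (assumed)
    training_hours = 24 * 30    # one month of wall-clock training (assumed)
    grid_kg_co2_per_kwh = 0.4   # grid carbon intensity (assumed)

    energy_kwh = gpus * power_per_gpu_kw * pue * training_hours
    co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

    print(f"~{energy_kwh:,.0f} kWh, ~{co2_tonnes:.1f} tonnes of CO2 per training run")
    ```

    With these made-up numbers a single run works out to roughly 21,000 kWh and about 8 tonnes of CO2, and the paper's point is that developing a model typically involves many such runs.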



    However, there are other ways to overcome this by changing the model architecture; the other day I found a nice example that instead uses a neuromorphic model inspired by chaotic systems: Controllable chaotic learning. This can apparently reduce energy consumption to the milliwatt scale, though, as you can imagine, it is still not very popular.

    Machine learning studies need colossal power to process massive datasets and train neural networks to reach high accuracies, which have become gradually unsustainable. Limited by the von Neumann bottleneck, current computing architectures and methods fuel this high power consumption. Here, we present an analog computing method that harnesses chaotic nonlinear attractors to perform machine learning tasks with low power consumption. Inspired by neuromorphic computing, our model is a programmable, versatile, and generalized platform for machine learning tasks. Our model provides exceptional performance in clustering by utilizing chaotic attractors' nonlinear mapping and sensitivity to initial conditions. When deployed as a simple analog device, it only requires milliwatt-scale power levels while being on par with current machine learning techniques. We demonstrate low errors and high accuracies with our model for regression and classification-based learning tasks.



    Gassho

    stlah, Kaitan

    Kaitan - 界探 - Realm searcher
    Formerly known as "Bernal"


    • Tairin
      Member
      • Feb 2016
      • 2818

      #3
      Thank you for the articles, Kokuu.

      Unfortunately this is another case where we humans are so far removed from the consumption that we can’t see the ramifications of our actions. People just won’t connect (or maybe care) that the cost of generating one image is so high. Just wait until generative video becomes the norm and people use it to entertain themselves instead of YouTube or Netflix.


      Tairin
      Sat today and lah
      泰林 - Tai Rin - Peaceful Woods


      • Jundo
        Treeleaf Founder and Priest
        • Apr 2006
        • 40288

        #4
        I just want to mention that this issue came up at the AI conference I just attended. Long story short, like many things which use resources (cars, planes, computers), it is wasteful if used for silly or even harmful things, but an incredibly powerful and positive tool if used for beneficial things. The problem right now is that, like cars and planes, the technology will mostly be used for less worthwhile reasons.

        But, yes, maybe the AI itself will propose ways to make its own operation more efficient. I think, for example, of reports like this:

        Researchers show how AI’s energy consumption can be greatly reduced with energy-efficient models, leading to substantial energy savings without significantly affecting performance. They provide a guide for AI development prioritizing energy efficiency.

        https://scitechdaily.com/revolutioni...gy-efficiency/
        This is, in fact, one reason that I will be giving the Bodhisattva Precepts to an A.I. system. I will incorporate this notion into the Precepts on Generosity, not being clutching, as we ask A.I. to be moderate in its operations.

        Gassho, J
        stlah
        ALL OF LIFE IS OUR TEMPLE


        • Matt Johnson
          Member
          • Jun 2024
          • 342

          #5
          You know, it's weird. I have been a proponent of skilful use of technology, starting with this book:

          https://youtu.be/8e_9Db-yX8E?feature=shared

          It was a book written in the late '70s about life in the 2000s which suggested that I would wake up and have a robot dress me and do all the things that I didn't want to do, like clean my room. I was stoked. I couldn't wait for the future. Obviously, not only has that future come and gone, but I'm still waiting for my jetpack and my flying car.

          While things didn't turn out exactly how I expected based on this book, I now use advanced technology in every aspect of my life--from my restaurant, to electronic music creation, to substitute teaching, to zazen. However, I'm finding it increasingly difficult to toe the techno-optimist line.

          I have tried to balance my enthusiasm for technology with the pessimistic knowledge that our current round of technological development was the biggest waste of resources known to humankind.

          It led me to run in fear from the city into the countryside, build an off-grid house (shack), and start homesteading while waiting for the end. The calamity I was expecting has not yet arrived, although we have had a few warnings recently.

          While I am happy that some are trying to be more optimistic, I am also hypersensitive to the "toxic positivity" of the techno-optimists. No matter how hard I try, I cannot shake the feeling that we are close to experiencing, if not a civilisational collapse (goodbye internet), then at the very least a cultural one (goodbye capitalism). I believe the road ahead is going to be very bumpy and I think things will have to get a lot worse before they get better (and "better" is a matter of perspective).

          I am very optimistic over a long enough timeline, and I believe transhumanists are as well. The problem is that the only thing that can live in the wasteland we are quickly creating is going to be a robot, and I cannot support that. While I agree that technology can be used for wonderful things, many of the problems it can be used to fix were created by technology. So perhaps we can stop "putting a head on top of a head" and appreciate what we have originally.

          Sorry, I just had to add my voice to the chorus of the ecodharma.

          _/\_

          sat/lah

          Matt


          • Tairin
            Member
            • Feb 2016
            • 2818

            #6
            Originally posted by Matt Johnson
            However, I'm finding it increasingly difficult to toe the techno-optimist line.
            I am waiting for my jet pack too.

            I don’t talk much about this here, but I’ve worked in the technology industry (computer science and info tech) for over 30 years, most of that time doing things that, while not necessarily bleeding edge, are certainly leading edge. I’ve seen a number of trends come and go (and then come back again). In my current role I have insight into many of the current cyber breaches or cyber events that happen daily, many of which you’ll never hear about on the news. It frankly fills me with the feeling that we’ve definitely opened Pandora’s Box. While there are many benefits to some of these technologies, I will put myself firmly in the techno-pessimist camp. Most people have put their trust in technology they don’t understand.

            I am not worried about robot overlords or a Skynet-type scenario. I am much more concerned about the poor controls in place and the bad actors who will exploit them.

            Anyways… sorry for going long and getting basically off topic from our practice here. I make it a general policy not to comment on these threads.


            Tairin
            Sat today and lah
            泰林 - Tai Rin - Peaceful Woods


            • Matt Johnson
              Member
              • Jun 2024
              • 342

              #7
              Originally posted by Tairin

              I am waiting for my jet pack too.

              I don’t talk much about this here, but I’ve worked in the technology industry (computer science and info tech) for over 30 years, most of that time doing things that, while not necessarily bleeding edge, are certainly leading edge. I’ve seen a number of trends come and go (and then come back again). In my current role I have insight into many of the current cyber breaches or cyber events that happen daily, many of which you’ll never hear about on the news. It frankly fills me with the feeling that we’ve definitely opened Pandora’s Box. While there are many benefits to some of these technologies, I will put myself firmly in the techno-pessimist camp. Most people have put their trust in technology they don’t understand.

              I am not worried about robot overlords or a Skynet-type scenario. I am much more concerned about the poor controls in place and the bad actors who will exploit them.

              Anyways… sorry for going long and getting basically off topic from our practice here. I make it a general policy not to comment on these threads.


              Tairin
              Sat today and lah
              Agreed, Tairin,

              I think you are perfectly safe to talk about what it is that you're doing with internet security (not to the extent that you'd have to kill us if you told us what you really knew).

              As you well know, machine learning is already being deployed very effectively in cybersecurity breaches, and these tools are not even as sophisticated as LLMs.

              And it all does relate to our environmental concerns as well. I'm certainly not the only former techno-optimist building a bunker. But I fundamentally feel that we need to learn a big lesson before we can move forward, and I'm afraid of that lesson. The truth is, some of these techno-optimists are environmentalists who believe that the only way forward is to bring the human species back in line with the planet, by force if necessary. There are psychopaths on both sides of this debate.

              I am not concerned about Skynet. I'm concerned about the precarity of complex technology; it is not very resilient. Let's face it, we are one solar flare away from the dark ages.

              Have you read Jundo's new book by the way?

              _/\_

              sat / lah

              Matt


              • Tairin
                Member
                • Feb 2016
                • 2818

                #8
                Originally posted by Matt Johnson
                Have you read Jundo's new book by the way?
                No. The whole topic of Buddhism in the future holds no interest to me.


                Tairin
                Sat today and lah
                泰林 - Tai Rin - Peaceful Woods


                • Jundo
                  Treeleaf Founder and Priest
                  • Apr 2006
                  • 40288

                  #9
                  While I am happy that some are trying to be more optimistic, I am also hypersensitive to the "toxic positivity" of the techno-optimists. No matter how hard I try, I cannot shake the feeling that we are close to experiencing, if not a civilisational collapse (goodbye internet), then at the very least a cultural one (goodbye capitalism). I believe the road ahead is going to be very bumpy and I think things will have to get a lot worse before they get better (and "better" is a matter of perspective).
                  Well, if that is to be the case, then it does not matter anyway.

                  And if we want to give up on trying to avoid that, we can.

                  But some of us don't want to give up quite so easily.

                  Gassho, J

                  stlah
                  ALL OF LIFE IS OUR TEMPLE


                  • Matt Johnson
                    Member
                    • Jun 2024
                    • 342

                    #10
                    Originally posted by Jundo

                    Well, if that is to be the case, then it does not matter anyway.

                    And if we want to give up on trying to avoid that, we can.

                    But some of us don't want to give up quite so easily.

                    Gassho, J

                    stlah
                    For some of us, "the future is already here. It is just not evenly distributed." - William Gibson

                    First of all, I want to say that I applaud you for your book and your skilful means in gaining the attention of the people who matter where "the future" is concerned. As I read your book, I was struck by how much of what you hope for the future could come true now if people just woke up... all of that tech (and subsequent use of resources) would then be unnecessary (especially in the part where we drug 30% of them to make them enlightened). (I think you even said as much.) And I agree: if it's going to happen anyway, we might as well be there to help steer it.

                    Second, I am not giving up. This would be considered "hedging one's bets."

                    Third, I have been greatly buoyed by seeing some of the smaller, lesser-known Sanghas that are growing in little houses on the side of highways with Zendos attached to them. Sanghas that are managing to blend into their communities--where there is less and less separation between who is a Buddhist and who is not, between what is Buddhist practice and what is life. It is a model of resilience that I think is the future of Buddhism, especially should anything happen to our connectivity or our fragile complex systems.

                    _/\_

                    sat / lah

                    Matt

                    Last edited by Matt Johnson; 07-14-2024, 10:29 PM.


                    • Ryumon
                      Member
                      • Apr 2007
                      • 1789

                      #11
                      Like Tairin, a lot of my work is around computer security. I write articles and host a podcast for a company that makes security software, and I am acutely aware of the risks facing people because of all the data that can be exposed in breaches. One example, just publicized, affected tens of millions of AT&T users in the US, whose call logs were breached.

                      Generative AI does use a lot of electricity, but I think this is going to change a lot soon, both because of its cost and because of concerns about security and privacy. Apple is releasing a number of generative AI features later this year, and most of them will operate on device. Smaller LLMs have proven to be very efficient, and Apple will offer users the option to query ChatGPT if their device can't handle a request. Going forward, with refined LLMs and faster processors, it seems clear that we will get to the point where most GenAI work is done on devices. It will take some years for devices to catch up, but this roadmap looks plausible.
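
                      For anyone curious what "on device" looks like in practice, here is a minimal sketch of running a small quantized model locally. It assumes the llama-cpp-python bindings and a model file you have already downloaded; the file name is a placeholder, not a recommendation.

                      ```python
                      # Minimal sketch of on-device inference with a small quantized model.
                      # Assumes: pip install llama-cpp-python, plus a GGUF model file on disk
                      # (the path below is a placeholder).
                      from llama_cpp import Llama

                      llm = Llama(
                          model_path="models/small-model-q4.gguf",  # placeholder path
                          n_ctx=2048,    # context window
                          n_threads=4,   # a few CPU cores -- no data centre involved
                      )

                      out = llm(
                          "Q: Why can on-device inference save energy?\nA:",
                          max_tokens=96,
                          stop=["Q:"],
                      )
                      print(out["choices"][0]["text"].strip())
                      ```

                      This is the kind of workload the paragraph above expects to move onto phones and laptops as small models and local hardware improve.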

                      On the other hand, the progress that AI has made in finding new antibiotics, conducting vaccine research, and more, shows that there are benefits well beyond what most people see, namely silly pictures created by AI tools. I waver between techno-optimism and pessimism, because I see how these tools can be both beneficial and dangerous.

                      Gassho,
                      Ryūmon (Kirk)
                      Sat Lah
                      I know nothing.


                      • Matt Johnson
                        Member
                        • Jun 2024
                        • 342

                        #12
                        Originally posted by Ryumon
                        Like Tairin, a lot of my work is around computer security. I write articles and host a podcast for a company that makes security software, and I am acutely aware of the risks facing people because of all the data that can be exposed in breaches. One example, just publicized, affected tens of millions of AT&T users in the US, whose call logs were breached.

                        Generative AI does use a lot of electricity, but I think this is going to change a lot soon, both because of its cost and because of concerns about security and privacy. Apple is releasing a number of generative AI features later this year, and most of them will operate on device. Smaller LLMs have proven to be very efficient, and Apple will offer users the option to query ChatGPT if their device can't handle a request. Going forward, with refined LLMs and faster processors, it seems clear that we will get to the point where most GenAI work is done on devices. It will take some years for devices to catch up, but this roadmap looks plausible.
                        Wow! This is cool. As befits an online sangha, there are an awful lot of IT people here! I think that's awesome!

                        Yes, I am looking forward to more streamlined, efficient, locally hostable LLMs (my voice typing has taken a major leap forward recently on Android and Apple). I think one of the main debates happening now is whether to allow some of these large language models to improve themselves and find ways of making themselves more efficient. But this is literally the Pandora's box that everyone is worried about.

                        It's really amazing what they can do for the average Joe. Recently I used GPT-4o: I took historical weather data, added the sales data from my cafe, which is very seasonal and weather dependent, and then got ChatGPT to figure out how the current weather forecast will affect business as a percentage of sales (helping us fine-tune our staffing and food waste). No data yet on how well it works. I'm just a tiny little restaurant so I don't give a shit about people having my data, but it's harder for smaller tech companies, for example, to have that same level of trust.
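
                        For what it's worth, the same sort of estimate can be done by hand with a couple of Python libraries rather than by asking ChatGPT. Everything here (file names, column names, the simple linear model) is a made-up illustration, not what GPT-4o actually produced.

                        ```python
                        # Rough sketch: estimate tomorrow's sales, as a percentage of an
                        # average day, from a weather forecast. Names are placeholders.
                        import pandas as pd
                        from sklearn.linear_model import LinearRegression

                        weather = pd.read_csv("daily_weather.csv", parse_dates=["date"])  # date, temp_c, precip_mm
                        sales = pd.read_csv("daily_sales.csv", parse_dates=["date"])      # date, sales

                        df = sales.merge(weather, on="date")

                        X = df[["temp_c", "precip_mm"]]
                        y = df["sales"]
                        model = LinearRegression().fit(X, y)

                        # Tomorrow's forecast (made-up numbers)
                        forecast = pd.DataFrame({"temp_c": [22.0], "precip_mm": [0.0]})
                        predicted = model.predict(forecast)[0]

                        print(f"Expected sales: {100 * predicted / y.mean():.0f}% of an average day")
                        ```

                        A plain linear model like this ignores seasonality and day-of-week effects, which a real version would want to account for.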

                        Right now, as I understand it, the improvements to ChatGPT are sandboxed and then implemented by humans. But how long till that changes? Probably not as long as we think. Just as soon as I think I'm caught up with the changes, it's changed again, and it seems to be accelerating... The singularity thing might actually happen... but honestly I can't stand Kurzweil; he should have stuck to keyboards...

                        Crap, this is very quickly becoming not very Zen or ecodharma... or is it?

                        _/\_

                        sat/lah

                        Matt


                        • Ryumon
                          Member
                          • Apr 2007
                          • 1789

                          #13
                          Interestingly, the dictation features on computers and smartphones aren't really AI tools. They do use some machine learning, and are built on a long accretion of research into voice recognition, but they aren't that complex. I have been using dictation in one form or another since the late 90s, and I dictate a lot of what I write.

                          This said, just today I was trying the beta of the new Adobe Podcast Studio, and the transcriptions it makes - which are AI-powered - are quite good. Other tools do transcriptions as well using AI. And, as an aside, the coolest thing about the Adobe tool is that you can edit audio by cutting and pasting text in the transcription. It's not perfect, and has a way to go, but soon this whole process will become much simpler. (And I say this as I am editing a podcast episode in Logic Pro, which is a great tool, but is overpowered for simple audio like podcasts.)

                          Gassho,

                          Ryūmon (Kirk)

                          Sat Lah
                          I know nothing.


                          • Matt Johnson
                            Member
                            • Jun 2024
                            • 342

                            #14
                            Originally posted by Ryumon
                            Interestingly, the dictation features on computers and smartphones aren't really AI tools. They do use some machine learning, and are built on a long accretion of research into voice recognition, but they aren't that complex. I have been using dictation in one form or another since the late 90s, and I dictate a lot of what I write.
                            Yeah, I was talking about maybe 5 years ago, when I realised I could download the voice recognition so it would work offline (I live in a very rural area with very spotty internet).

                            I am a big Ableton user but really for podcasts you could use Audacity...

                            _/\_

                            sat/lah

                            Matt
                            Last edited by Matt Johnson; 07-15-2024, 04:28 PM.


                            • Ryumon
                              Member
                              • Apr 2007
                              • 1789

                              #15
                              It has worked offline on Apple devices since pretty much forever.

                              Audacity? Pfft... I know, I could, but there are features in Logic that make editing much smoother.

                              Gassho,

                              Ryūmon (Kirk)

                              Sat Lah
                              I know nothing.
