[FutureBuddha] The Dangers & Ethics of "Genetic Welding" (and AI too)

  • Jundo
    Treeleaf Founder and Priest
    • Apr 2006
    • 40361

    [FutureBuddha] The Dangers & Ethics of "Genetic Welding" (and AI too)



    In my book Building the Future Buddha, I explore whether upcoming technological, medical, genetic and neurological advances may allow us ... while fully respecting all human and civil rights and personal autonomy, and only upon rigorous clinical testing for safety ... to realize certain ancient Buddhist goals, including reduced acts of violence committed in excessive rage (we had another tragic school shooting in America yesterday ... ) and enhanced human empathy for other sentient beings (which would also help prevent much violence, as well as help mitigate a variety of social harms such as homelessness and poverty, as more of us come to feel intensely and personally the suffering of strangers, as we would were the victim our own parent or child or ourselves). Desires running to harmful excess, including many substance addictions, might also be better moderated or alleviated, while more moderate desires would lead to both smaller waistlines and reduced mountains of waste now caused by runaway consumption. We might, perhaps, become 'Better Buddhists through Chemistry,' our Bodhisattva drives to help sentient beings enhanced within us, and thus realize Buddhist ideals in ways simply unavailable to traditional Buddhists of the past.

    However, such measures all present grave ethical considerations and dangers of the highest order, which must be weighed and debated before we act.

    A new paper looks at some of those considerations:

    With CRISPR-Cas9 technology, humans can now rapidly change the evolutionary course of animals or plants by inserting genes that can easily spread through entire populations. Evolutionary geneticist Asher Cutter proposes that we call this evolutionary meddling “genetic welding.” In an opinion paper published today (March 28) in the journal Trends in Genetics, he argues that we must scientifically and ethically scrutinize the potential consequences of genetic welding before we put it into practice.

    ... “It raises the question of how much should humans intervene into processes that are normally beyond our control,” says Cutter.
    “If ethicists, medical practitioners, and politicians decide that it is acceptable in some cases to edit the germ line of humans, then that would open the possibility that genetic welding could be used as a tool in that regard,” says Cutter. “This would open a much bigger can of worms by virtue of the fact that genetic welding could change the entirety of a population or species, not just a few individuals that elected to have a procedure.”

    Though it might be difficult to experimentally assess the long-term implications of genetic welding, Cutter says that thought experiments, mathematical theory, computer simulations, and conversations with bioethicists could all play important roles, as could experiments in organisms with short lifespans and rapid reproduction.

    Reference: “Synthetic gene drives as an anthropogenic evolutionary force” by Asher D. Cutter, 28 March 2023, Trends in Genetics.
    https://www.cell.com/trends/genetics...3obFc78Qski0t4
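
    As a small concrete illustration of Cutter's point about mathematical theory and computer simulations, here is a toy model (my own hypothetical sketch in Python, not code from the paper) of why a gene drive is such a powerful evolutionary force: an ordinary gene reaches about half of a carrier's offspring, while a CRISPR-based drive copies itself onto the partner chromosome in heterozygotes and so reaches nearly all of them.

```python
# Toy deterministic model of a synthetic gene drive sweeping a population.
# A hypothetical illustration of super-Mendelian inheritance, NOT code from
# Cutter's paper.  Assumes an infinite, randomly mating population with no
# fitness cost to the drive allele.

def next_frequency(p: float, transmission: float) -> float:
    """Frequency of the drive allele D in the next generation.

    p            -- current frequency of D
    transmission -- fraction of a heterozygote's gametes that carry D
                    (0.5 = ordinary Mendelian; ~0.95 = an efficient drive)
    """
    q = 1.0 - p
    # Hardy-Weinberg genotype frequencies: DD = p^2, Dd = 2pq, dd = q^2.
    # DD parents always transmit D; Dd parents transmit it at `transmission`;
    # dd parents never do.
    return p * p + 2.0 * p * q * transmission

def sweep(p0: float, transmission: float, generations: int) -> list[float]:
    """Trajectory of the drive allele frequency over the generations."""
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_frequency(freqs[-1], transmission))
    return freqs

if __name__ == "__main__":
    # Release the engineered allele at a 1% frequency and watch it spread.
    mendelian = sweep(0.01, transmission=0.5, generations=30)
    gene_drive = sweep(0.01, transmission=0.95, generations=30)
    for gen in (0, 5, 10, 20, 30):
        print(f"gen {gen:2d}: Mendelian {mendelian[gen]:.3f}   "
              f"gene drive {gene_drive[gen]:.3f}")
```

    Run as-is, the ordinary allele never moves from its 1% release frequency, while the drive allele climbs above 99% in about ten generations ... a whole population rewritten from a tiny release, which is precisely why Cutter argues for scrutiny before use.
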
    Also, another recent publication looks at ethical questions and other concerns regarding super-intelligent AI ...

    6 Challenges – Identified by Scientists – That Humans Face With Artificial Intelligence

    A professor from the University of Central Florida and 26 other scientists have published a study highlighting the obstacles that humanity must tackle to guarantee that artificial intelligence (AI) is dependable, secure, trustworthy, and aligned with human values. The study was published in the International Journal of Human-Computer Interaction.

    For instance, the coming widespread integration of artificial intelligence could significantly impact human life in ways that are not yet fully understood, says Garibay, who works on AI applications in materials and drug design and discovery, and on how AI impacts social systems. The six challenges Garibay and the team of researchers identified are:
    • Challenge 1, Human Well-Being: AI should be able to discover opportunities to benefit human well-being, and should be designed to support the user's well-being in its interactions.
    • Challenge 2, Responsible AI: Responsible AI refers to the concept of prioritizing human and societal well-being across the AI lifecycle. This ensures that the potential benefits of AI are leveraged in a manner that aligns with human values and priorities, while also mitigating the risk of unintended consequences or ethical breaches.
    • Challenge 3, Privacy: The collection, use, and dissemination of data in AI systems should be carefully considered to ensure the protection of individuals' privacy and to prevent harmful use against individuals or groups.
    • Challenge 4, Design: Human-centered design of AI systems should follow a framework that can inform practitioners. This framework would distinguish between AI with extremely low risk, AI with no special measures needed, AI with extremely high risks, and AI that should not be allowed.
    • Challenge 5, Governance and Oversight: A governance framework that considers the entire AI lifecycle from conception to development to deployment is needed.
    • Challenge 6, Human-AI interaction: To foster an ethical and equitable relationship between humans and AI systems, it is imperative that interactions be predicated upon the fundamental principle of respecting the cognitive capacities of humans. Specifically, humans must maintain complete control over and responsibility for the behavior and outcomes of AI systems.


    The study, which was conducted over 20 months, comprises the views of 26 international experts who have diverse backgrounds in AI technology.
    Gassho, J

    stlah
    Last edited by Jundo; 10-22-2023, 06:46 AM.
    ALL OF LIFE IS OUR TEMPLE
  • Jundo
    Treeleaf Founder and Priest
    • Apr 2006
    • 40361

    #2
    Another warning about AI today, including the risks of "mediocre AI":

    "Tech leaders urge a pause in the 'out-of-control' artificial intelligence race"

    Are tech companies moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans?

    That's the conclusion of a group of prominent computer scientists and other tech industry notables such as Elon Musk and Apple co-founder Steve Wozniak who are calling for a 6-month pause to consider the risks.

    Their petition published Wednesday is a response to San Francisco startup OpenAI's recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT that helped spark a race among tech giants Microsoft and Google to unveil similar applications.

    What do they say?

    The letter warns that AI systems with "human-competitive intelligence can pose profound risks to society and humanity" — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.

    It says "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."

    "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the letter says. "This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

    A number of governments are already working to regulate high-risk AI tools. The United Kingdom released a paper Wednesday outlining its approach, which it said "will avoid heavy-handed legislation which could stifle innovation." Lawmakers in the 27-nation European Union have been negotiating passage of sweeping AI rules.

    ... OpenAI, Microsoft and Google didn't respond to requests for comment Wednesday, but the letter already has plenty of skeptics.

    "A pause is a good idea, but the letter is vague and doesn't take the regulatory problems seriously," says James Grimmelmann, a Cornell University professor of digital and information law. "It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars."

    ... Is this AI hysteria?
    While the letter raises the specter of nefarious AI far more intelligent than what actually exists, it's not "superhuman" AI that some who signed on are worried about. While impressive, a tool such as ChatGPT is simply a text generator that makes predictions about what words would answer the prompt it was given, based on what it's learned from ingesting huge troves of written works.

    Gary Marcus, a New York University professor emeritus who signed the letter, said in a blog post that he disagrees with others who are worried about the near-term prospect of intelligent machines so smart they can improve themselves beyond humanity's control. What he's more worried about is "mediocre AI" that's widely deployed, including by criminals or terrorists to trick people or spread dangerous misinformation.

    "Current technology already poses enormous risks that we are ill-prepared for," Marcus wrote. "With future technology, things could well get worse."

    https://www.npr.org/2023/03/29/11668...elligence-race
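
    To make concrete the point above that a tool such as ChatGPT is 'simply a text generator,' here is a minimal sketch of next-word prediction in Python. It assumes the Hugging Face transformers library and the small public gpt2 checkpoint as a stand-in (ChatGPT's own model is not publicly downloadable, but it rests on the same basic mechanism): the model scores every possible next token given the text so far, and generation just appends a likely token over and over.

```python
# A minimal sketch of next-token text generation -- assumes the Hugging Face
# transformers library and the small public "gpt2" checkpoint as a stand-in
# for larger chat models.  pip install torch transformers

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()  # inference only; no training

prompt = "The dangers of artificial intelligence include"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                     # grow the text by 20 tokens
        logits = model(input_ids).logits    # a score for every vocabulary token
        next_id = logits[0, -1].argmax()    # greedily take the likeliest next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

    Real chatbots sample from the predicted distribution instead of always taking the single likeliest token, and are further tuned with human feedback, but the core loop ... predict, append, repeat ... is the one above. Nothing in that loop understands or intends anything; the near-term risk Marcus points to comes from deploying such generators widely and carelessly.
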
    Gassho, J

    stlah
    ALL OF LIFE IS OUR TEMPLE
