Longtermism poses a real threat to humanity

https://www.newstatesman.com/ideas/2023/08/longtermism-threat-humanity

“AI researchers such as Timnit Gebru affirm that longtermism is everywhere in Silicon Valley. The current race to create advanced AI by companies like OpenAI and DeepMind is driven in part by the longtermist ideology. Longtermists believe that if we create a “friendly” AI, it will solve all our problems and usher in a utopia, but if the AI is “misaligned”, it will destroy humanity…”

@technology

  • Silvally@beehaw.org

    I’m not an expert on longtermism, but I have read William MacAskill’s “What We Owe the Future”.

    The quote highlighted about AI is a clear demonstration of how longtermism is being misappropriated. It describes a “rush”, when the impression I got from MacAskill’s book is that AI ethics needs to be discussed very carefully and that rushing into it is the opposite of what longtermists should do. To describe it briefly, this is because AI presents the risk of what MacAskill calls “value lock-in”, where our current society’s values persist long into the future, or where the values that persevere are decided by the few people who create the first generative AI.

    In reality, people like Musk probably see AI as a means to push what they believe are the correct moral values long into the future. Which is terrifying…

    This is why AI ethics is extremely important. We are already seeing the institutional prejudices our current society possesses (racism, sexism, and so on) being perpetuated by AI. This is why I was absolutely horrified when I saw that Microsoft/OpenAI was scrapping its AI ethics team…
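    To make that concrete, here is a minimal synthetic sketch (my own toy illustration, not anything from MacAskill or the article; the data, the “group” attribute, and the use of scikit-learn are all assumptions) of how a model trained on biased historical decisions reproduces that bias:

    # Toy illustration: a classifier trained on biased historical hiring
    # decisions learns to penalise one group even at equal qualification.
    # Entirely synthetic data; scikit-learn chosen only as an example library.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000

    # Features: a qualification score and a binary group membership (0 or 1).
    qualification = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)

    # Historical labels favour group 0 over group 1 at equal qualification.
    logits = 1.5 * qualification - 1.0 * group
    hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

    X = np.column_stack([qualification, group])
    model = LogisticRegression().fit(X, hired)

    # Two identical candidates who differ only in group membership.
    print("P(hired | group 0):", model.predict_proba([[0.5, 0]])[0, 1])
    print("P(hired | group 1):", model.predict_proba([[0.5, 1]])[0, 1])
    # The second probability comes out lower: the model has faithfully
    # learned the prejudice baked into its training data.

    Nothing in the model is prejudiced by design; it simply optimises against data that already encodes the prejudice, which is exactly how institutional bias gets perpetuated at scale.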

    The “rush” to produce AI is a problem with Capitalism, not longtermism. There is a rush to create the first generative AI not because it will benefit society, but because it will make buttloads of money.

    • frog 🐸@beehaw.org

      I’m inclined to agree. Even without having read MacAskill’s book (though I’m interested in reading it now), the way we’re approaching AI right now seems more like a short-term gold rush, wrapped up in friendly-sounding “this will lead to utopia in the future” rhetoric to justify “we’re going to make a lot of money right now”.

    • jarfil@beehaw.org

      The “rush” to produce AI is a problem with Capitalism, not longtermism. There is a rush to create the first generative AI not because it will benefit society, but because it will make buttloads of money.

      This. I came here to say exactly this.