Employees say they weren’t adequately warned about the brutality of some of the text and images they would be tasked with reviewing, and were offered inadequate psychological support, or none at all. Workers were paid between $1.46 and $3.74 an hour, according to a Sama spokesperson.

  • Kwakigra@beehaw.org
    1 year ago

    What a human mind does in transforming its nature and experiences through artistic expression is very different from what the machine does in referencing values and expressing them in human language without any kind of understanding. You are right that LLMs don’t literally copy word for word what they find, and they certainly are sophisticated pieces of technology, but what they are expressing is processed language or images rather than an act of artistic creation. Less culinary experience and more industrial sausage. They do not have intelligence and are incapable of producing art of any kind. This isn’t to say they aren’t a threat to commodified art in the marketplace, because they very much are, but in terms of enrichment or even entertainment the machine is not capable of producing anything worthwhile unless the viewer is looking for something they don’t have to look at for more than a moment or read with any serious interest in the contents. I’m interested in people using LLMs as a tool in their own artistic pursuits, but they have their own limitations, as any tool does.

    • Scrithwire@lemmy.one
      1 year ago

      Give the AI a body with sense inputs, and allow those sense inputs to transform the “decider” value. That’s a step in the direction of true creativity.

      • Kwakigra@beehaw.org
        1 year ago

        A step closer to approximating the intelligence of a worm, perhaps. I once looked into where the line falls on which animalia are capable of operant conditioning, which I hypothesize may be the first purpose of a brain, and on our present taxonomic hierarchy the line runs among the worms (jellyfish do not have sufficient faculties for operant conditioning and are on the other side of it). Associating sensory input with decider values is still not as sophisticated as learning to be attracted to beneficial things and to avoid dangerous things, because the machine has no needs or desires to base its reactions on; those would have to be trained into it by beings with intelligence. I’m not saying it’s impossible to artificially create a being like this, but in my estimation we are very far from it, considering that we barely grasp how any brain works beyond being aware of their extreme complexity. And considering the degree of difference between a worm and a sentient human, we are much further still from what we would consider a human level of intelligence.

        Edit: Re-reading this, it seems much more snippy than I intended, and I’m not sure how to reframe it to sound more neutral. I meant it as a neutral continuation of a discussion of an idea.