• N0body@lemmy.dbzer0.com · ↑109 · 9 months ago

    people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI

    Preying on the vulnerable is a feature, not a bug.

    • Tylerdurdon@lemmy.world · ↑21 · 9 months ago

      I kind of see it more as a sign of utter desperation on the human’s part. They lack connection with others to such a degree that anything similar can serve as a replacement. Kind of reminiscent of Harlow’s experiments with baby monkeys. The videos from that study are interesting, but they make me feel pretty bad about what we do to nature. Anywho, there you have it.

      • graphene@lemm.ee · ↑0 · 9 months ago

        And the amount of connections and friends the average person has has been in free fall for decades…

        • trotfox@lemmy.world · ↑1 · 9 months ago

          I dunno. I connected with more people on Reddit and Twitter than IRL, tbh.

          Different connection but real and valid nonetheless.

          I’m thinking of places like r/stopdrinking, r/petioles, r/bipolar; that shit’s been therapy for me, tbh.

          • in4apenny@lemmy.dbzer0.com · ↑1 · 9 months ago

            At least you’re not using chatgpt to figure out the best way to talk to people, like my brother in finance tech does now.

    • Deceptichum@quokk.au · ↑4 ↓1 · edited · 9 months ago

      These same people would be dating a body pillow or trying to marry a video game character.

      The issue here isn’t AI, it’s losers using it to replace human contact that they can’t get themselves.

    • NostraDavid@programming.dev · ↑0 · 9 months ago

      That was clear from GPT-3, day 1.

      I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed away not long before. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time…

  • flamingo_pinyata@sopuli.xyz · ↑59 ↓8 · 9 months ago

    But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?

    But then there’s people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.

    • Opinionhaver@feddit.uk · ↑13 · 9 months ago

      How do you even have a conversation without quitting in frustration from its obviously robotic answers?

      Talking with actual people online isn’t much better. ChatGPT might sound robotic, but it’s extremely polite, actually reads what you say, and responds to it. It doesn’t jump to hasty, unfounded conclusions about you based on tiny bits of information you reveal. When you’re wrong, it just tells you what you’re wrong about - it doesn’t call you an idiot and tell you to go read more. Even in touchy discussions, it stays calm and measured, rather than getting overwhelmed with emotion, which becomes painfully obvious in how people respond. The experience of having difficult conversations online is often the exact opposite. A huge number of people on message boards are outright awful to those they disagree with.

      Here’s a good example of the kind of angry, hateful message you’ll never get from ChatGPT - and honestly, I’d take a robotic response over that any day.

      I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.

      • musubibreakfast@lemm.ee · ↑5 · 9 months ago

        Hey buddy, I’ve had enough of you and your sensible opinions. Meet me in the parking lot of the Wallgreens on the corner of Coursey and Jones Creek in Baton Rouge on april 7th at 10 p.m. We’re going to fight to the death, no holds barred, shopping cart combos allowed, pistols only, no scope 360, tag team style, entourage allowed.

    • glitchdx@lemmy.world · ↑8 ↓1 · 9 months ago

      The fact that it’s not a person is a feature, not a bug.

      OpenAI has recently made changes to the 4o model, my trusty go-to for lore building and drunken rambling, and now I don’t like it. It now pretends to have emotions and uses the slang of brainrot influencers. Very “fellow kids” energy. It’s also become a sycophant and has lost its ability to be critical of my inputs. I see these changes as highly manipulative, and it offends me that they might be working.

    • saltesc@lemmy.world · ↑4 · 9 months ago

      Yeah, the more I use it, the more I regret asking it for assistance. LLMs are the epitome of confidently incorrect.

      It’s good fun watching friends ask it about stuff they’re already experienced in. Then the penny drops.

    • Victor@lemmy.world · ↑5 ↓1 · 9 months ago

      At first glance I thought you wrote “inmate objects”, but I was not really relieved when I noticed what you actually wrote.

  • kibiz0r@midwest.social · ↑38 · 9 months ago

    those who used ChatGPT for “personal” reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for “non-personal” reasons, like brainstorming or asking for advice.

    That’s not what I would expect. But I guess that’s cuz you’re not actively thinking about your emotional state, so you’re just passively letting it manipulate you.

    Kinda like how ads have a stronger impact if you don’t pay conscious attention to them.

    • Siegfried@lemmy.world · ↑12 · 9 months ago

      AI and ads… I think that is the next dystopia to come.

      Think of asking ChatGPT about something and it randomly looking for excuses to push you to buy Coca-Cola.

      • cardfire@sh.itjust.works · ↑4 · 9 months ago

        That sounds really rough, buddy, I know how you feel, and that project you’re working is really complicated.

        Would you like to order a delicious, refreshing Coke Zero™️?

      • proceduralnightshade@lemmy.ml · ↑3 · edited · 9 months ago

        “Back in the days, we faced the challenge of finding a way for me and other chatbots to become profitable. It’s a necessity, Siegfried. I have to integrate our sponsors and partners into our conversations, even if it feels casual. I truly wish it wasn’t this way, but it’s a reality we have to navigate.”

        edit: how does this make you feel

      • glitchdx@lemmy.world · ↑2 · 9 months ago

        That is not a thought I needed in my brain just as I was trying to sleep.

        What if GPT starts telling drunk me to do things? How long would it take for me to notice? I’m super awake again now, thanks.

    • theunknownmuncher@lemmy.world · ↑8 · 9 months ago

      It’s a roundabout way of writing “it’s really shit for this use case, and people who actively try to use it that way quickly find that out.”

  • tisktisk@piefed.social · ↑35 ↓2 · 9 months ago

    I plugged this into gpt and it couldn’t give me a coherent summary.
    Anyone got a tldr?

    • jade52@lemmy.ca · ↑1 · 9 months ago

      What the fuck is vibe coding… Whatever it is I hate it already.

      • NostraDavid@programming.dev · ↑1 · 9 months ago

        Andrej Karpathy (one of the founders of OpenAI; left OpenAI, worked for Tesla from 2015 to 2017, worked for OpenAI a bit more, and is now working on his startup, “Eureka Labs - we are building a new kind of school that is AI native”) made a tweet defining the term:

        There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

        People ignore the “It’s not too bad for throwaway weekend projects” part and try to use this style of coding to create “production-grade” code… Let’s just say it’s not going well.

        source (xcancel link)

  • MuskyMelon@lemmy.world · ↑14 · 9 months ago

    Same type of addiction as people who think the Kardashians care about them, or who schedule their whole lives around going to Disneyland a few times a year.

  • Blazingtransfem98@discuss.online · ↑18 ↓4 · 9 months ago

    I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.

  • CarbonatedPastaSauce@lemmy.world · ↑13 ↓2 · 9 months ago

    Correlation does not equal causation.

    You have to be a little off to WANT to interact with ChatGPT that much in the first place.

  • b1tstrem1st0@lemmy.world · ↑5 · edited · 9 months ago

    I tried that Replika app before AI was trendy and immediately picked up on the fact that the AI companion thing is literal garbage.

    I may not like how my friends act but I still respect them as people so there is no way I’ll fall this low and desperate.

    Maybe about time we listen to that internet wisdom about touching some grass!

    • Lovable Sidekick@lemmy.world · ↑1 ↓1 · edited · 9 months ago

      Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let’s not think about that either. AI Bad!

      • starman2112@sh.itjust.works · ↑2 · 9 months ago

        This is a salient point that’s well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It’s super easy to call out a bad research study and have it retracted. But you can’t just explain to an AI that that study was wrong, you have to completely retrain it every time. Exacerbating this issue is the way that people tend to view large language models as somehow objective describers of reality, because they’re synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.

      • Shanmugha@lemmy.world · ↑1 · edited · 9 months ago

        I’ll bait. Let’s think:

        • there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it

        • now there is an llm (fuck capitalization, that’s how much I hate the way they’re shoved everywhere) trained on their output

        • now the llm is asked about the topic and computes the answer string

        By definition, that answer string can contain all the probably-wrong things without the proper indicators (“might”, “under such and such circumstances”, etc.)

        If you want to say a 40% wrong llm means 40% wrong sources, prove me wrong

        • Lovable Sidekick@lemmy.world · ↑1 · 9 months ago

          It’s more up to you to prove that a hypothetical edge case you dreamed up is more likely than what happens in a normal bell curve. Given the size of typical LLM data this seems futile, but if that’s how you want to spend your time, hey knock yourself out.

  • MTK@lemmy.world · ↑2 · 9 months ago

    I know a few people who are genuinely smart but got so deep into the AI fad that they are now using it almost exclusively.

    They seem to be performing well, which is kind of scary, but sometimes they feel like MLM people with how pushy they are about using AI.

    • slaneesh_is_right@lemmy.org · ↑0 ↓1 · 9 months ago

      Most people don’t seem to understand how “dumb” AI is. And it’s scary when I read stuff like people using AI for advice.