• ssfckdt@lemmy.blahaj.zone · ↑15 · 13 hours ago

    Honestly, people are probably going to start using it much less once they put ads on the low and free tiers.

    They’re realizing that once the VC money dries up, they can’t turn a profit over expenses without jacking up prices.

    Same shit, different fad. Offshoring was all the rage until demand meant providers started charging more, quality went down, and the reality of cross-timezone management became obvious. Cloud was all the rage until prices went up from demand and outages kept happening, and now some companies are “re-inhousing” their infrastructure.

    Can’t wait till they realize they also can’t deliver at a regular clip with only half the workers. Won’t be soon enough, alas.

  • ruvanoit@lemmy.world · ↑2 · 13 hours ago

    The point is we can’t compete with these SaaS AI products using the local models that run on our poor home servers. A subscription is way cheaper, at least in the short term.

    • myfunnyaccountname@lemmy.zip · ↑1 · 1 hour ago

      All the time at work. Free tier and paid. Translating shit from foreign customers. Extracting text from screenshots. There are lots of uses, just like any other tool. But it’s a tool; it’s not fucking magic or the devil. Just a tool.

      • sfgifz@lemmy.world · ↑6 · 18 hours ago

        Well, the fucked-up part is that businesses also want to see how well their AI “investment” is doing, which means tracking how much employees use the AI, plus other cooked-up metrics. You can’t get away from it, no matter how irritating it makes your job and life.

        • bridgeenjoyer@sh.itjust.works · ↑3 · 7 hours ago

          It’s infuriating because now every single meeting is “BLARGH HOW CAN WE MAKE DA AI DO THIS? HOW CAN WE UTILIZE THE AI???!!!”

          makes me want to scream.

  • Jankatarch@lemmy.world · ↑30 ↓1 · 1 day ago (edited)

    Educational institutions are buying Microsoft services as a trend, and we need them to boycott too.

    Getting a bus pass at my university takes 5 business days so they can “save costs on students who don’t use it.”

    Meanwhile, everyone’s student Gmail implicitly comes with a subscription to Copilot, Gemini, and OpenAI all at the same time; it doesn’t matter if you use them or not.

    Paid for by our tuition, no doubt.

  • Suavevillain@lemmy.world · ↑17 · 1 day ago

    I don’t pay for AI. There’s no real value in it at massive scale that would justify paying for it. Plus, it’s harmful. I can locally host AI for small tasks like cleaning up some data.

    • Scrollone@feddit.it · ↑3 · 14 hours ago

      Also, why pay for AI? You can use it for free, and when you hit your free daily limit, you just switch to another AI: Gemini, Perplexity, Claude, Le Chat, DeepSeek… rinse and repeat.

    • Gatsby@lemmy.zip · ↑8 · 1 day ago

      Yep! I feel way, way safer when I use something running on my laptop as opposed to feeding things into the bottomless hole that is Microsoft or Anthropic.

      All I want is something to double-check the verbiage of an email to make sure I’m not coming off like an asshole.

      • FauxLiving@lemmy.world · ↑7 · 1 day ago

        Unless you’re (not you, PhoenixDog, specifically; the general “you,” meaning the reader) blocking browser fingerprinting, using an ad blocker, running DNS filtering from trusted sources, staying off social media apps, not installing apps that require Google Play Services, and not using your credit card for digital payments, then simply changing the IP address the server sees doesn’t do very much.

        VPNs can be part of a solution, but if you aren’t doing all of the other things, they don’t hide who you are, because of the plethora of other avenues by which you can be tied to an existing advertising identity.

      • FlashMobOfOne@lemmy.world · ↑6 · 1 day ago

        Yeah, VPNs are unfortunately essential. If I hadn’t made it a project to request deletion of my info from the top 30 background-check sites, I’d be using something like DeleteMe too.

  • humanspiral@lemmy.ca · ↑20 · 1 day ago

    Grok/Mechahitler’s owner has likely donated more to GOP/Trump rule. Anthropic’s CEO recently wrote an essay outlining the dangers of AI, but it also underlined how the purely good US must develop Skynet to oppress all the bad, evil opposition to the US empire.

    The entire industry is lining up to get US government military/surveillance applications to use the absurd level of datacenters/energy being committed, because “Skynet now or China wins” is an even more aggressive consensus than “too big to fail.”

    • Jeremyward@lemmy.world · ↑10 · 1 day ago

      Yes, agreed. I have quit MSFT fully (switched to Ubuntu). Amazon Prime is gone and I’m no longer buying from them. Cancelled my Claude subscription. My Gmail now auto-forwards to my local server. Switched my web hosting from AWS to Hetzner. Cancelled my Netflix; it’s back to the library and a local Jellyfin server. Same with Spotify. No more Safeway or QFC; Costco is good enough and has the lowest prices locally, or occasionally Trader Joe’s. It’s saving me a whole shit-ton of money.

  • canofcam@lemmy.world · ↑49 · 2 days ago

    Not just ChatGPT.

    Stop using Amazon, Meta products, Netflix, Spotify, etc.

    All of these massive corporations are actively making life worse for people, and it will only get worse and worse as people continue to stay subscribed.

    The only option is to log off and find an alternative.

    • XLE@piefed.social · ↑6 · 1 day ago

      Apparently, they just took somebody else’s idea and made it all about ChatGPT:

      QuitGPT is in the mold of Galloway’s own recently launched campaign, Resist and Unsubscribe.

      Resist and Unsubscribe encourages you to get away from Google and Amazon, as well as OpenAI. QuitGPT endorses both (Google directly, Amazon indirectly).

      Resist and Unsubscribe is a holistic project; QuitGPT cuts out everything except for one product.

      Resist and Unsubscribe doesn’t inadvertently promote a single product; QuitGPT practically functions as a sneaky advertising campaign (kind of like how Larry David said he wouldn’t invest in FTX).

      It seems pretty clear which project is better, even if I don’t agree with everything the guy behind it said:

      Galloway argued that the best way to stop ICE was to persuade people to cancel their ChatGPT subscriptions.

      I don’t think voting with your dollars will make a huge difference when every ChatGPT subscription costs OpenAI money, but go off I guess

  • unspeakable_horror@thelemmy.club · ↑56 ↓2 · 2 days ago

    Off with their heads! GO self-hosted, go local… toss the rest in the trash can before this crap gets a foothold and fully enshittifies everything.

    • mushroommunk@lemmy.today · ↑31 ↓11 · 2 days ago

      LLMs are already shit. Going local still means burning the world just to run a glorified text-production machine.

      • suspicious_hyperlink@lemmy.today · ↑18 ↓34 · 2 days ago

        Having just finished getting an entire front end built for my website, I disagree. A few years ago I would have offshored this job to devs in some third-world country. Now AI can do the same thing, for cents, without having to wait a few days for the initial results and another day or two for each revision.

        • mushroommunk@lemmy.today · ↑45 ↓12 · 2 days ago

          The fact that you see nothing wrong with anything you said really speaks volumes about the inhumanity inherent in using “AI”.

          • suspicious_hyperlink@lemmy.today · ↑12 ↓27 · 2 days ago (edited)

            Please enlighten me. I am working on systems solving real-world issues, and now I can ship my solutions faster and at lower cost. Sounds like a win-win for everyone involved, except the offshore employees who have to look for new gigs now.

            Edit: I would actually rather read a reply than just watch you downvote. The point is that what you call a “glorified text-generating machine” has actual use cases.

            • wholookshere@piefed.blahaj.zone · ↑17 ↓2 · 1 day ago

              You missed the plot a little.

              It’s not that it doesn’t have use cases. It’s that it’s burning the world down along the way.

              How much more water went down the drain cooling the requests you made? How about the electricity going to AI data centers instead of local consumers?

              All the computer component shortages…

              And that’s before the fact that you admitted you would have hired a human, putting food on their table, instead of helping a corporate giant buy another mega yacht.

              • suspicious_hyperlink@lemmy.today · ↑2 ↓6 · 1 day ago

                Regarding the electricity not going to local customers: it’s not my fault that your country doesn’t have appropriate regulations. None of the data centers are located where I live anyway.

                Regarding hiring a human: I 100% would hire a human if there were no AI, you’re right, and I’m not trying to hide it. Sucks for them, I guess, but I don’t see a reason to keep using their services when I can get a cheaper and arguably better alternative now; I’m trying to make money, not run a charity supporting the development of third-world countries with authoritarian regimes.

                However, I’m pretty sure the companies I’m paying are currently operating at a loss on my $3/month and free coding plans. They want to grow their customer base, and I’ll just switch once they start trying to make a profit, like the Chinese ZAI did this week, hiking their prices by over 3x. With my <$3/month contribution they’re getting further away from a mega yacht, not closer to one.

                • wholookshere@piefed.blahaj.zone · ↑6 ↓2 · 1 day ago

                  “Well, it doesn’t affect me directly other than my bottom line, so sucks for everyone else.”

                  If that’s all you have to say, have the day you deserve.

            • kutt@lemmy.world · ↑20 ↓2 · 2 days ago

              I don’t know if it’s your fault, honestly. It’s the system that makes you want to offshore your work to developing countries instead of hiring local employees. I get it. It’s cheaper. But when even independent developers start doing this, we have reached post-late-stage capitalism.

            • mushroommunk@lemmy.today · ↑5 · 1 day ago

              Looks like others have come along and made my point for me while I slept. Except for calling out the dehumanizing language about the developers; they missed that one.

              • suspicious_hyperlink@lemmy.today · ↑1 ↓2 · 1 day ago

                They weren’t my full-time employees, just one-time gigs. I’ve worked with tons of freelancers over the years; I don’t have any special relationship with them. It was just a matter of whoever offered the lowest price for the gig each time.

          • VieuxQueb@lemmy.ca · ↑7 ↓1 · 1 day ago

            Yeah, and I wonder what those real-world products solving real-world issues he talks about actually are.

            • mushroommunk@lemmy.today · ↑1 · 1 day ago

              Yeah. It’s not hard to say a sentence about the problem space. Heck, in good faith I’ll say what I’ve been working on to start. I’m currently developing tools to help small communities teach their native language to younger generations as existing programs have stopped support for them.

          • suspicious_hyperlink@lemmy.today · ↑1 · 1 day ago

            It’s behind a login page, so I’m afraid you wouldn’t see much. Also, it was never supposed to be glorious (it wasn’t before LLMs either); it’s just a matter of having some form of UI as a necessity. I would be hiring actual designers if it were a landing page or something where the looks matter, rather than sticking with AI.

    • sudoer777@lemmy.ml · ↑2 · 1 day ago (edited)

      GO self-hosted, go local

      Last I checked, it cost ~$6000 to run a high-end LLM at terrible speeds, and that was before the RAM price hike. At the rate things are changing, it might be obsolete in a few years. And I don’t have that much money anyway.

      I’m going to stick with the free OpenCode Zen models for now, and maybe switch to OpenCode Black or Synthetic or whatever once those stop being available and use the open-weight models there.

    • ch00f@lemmy.world · ↑7 ↓2 · 2 days ago

      GO self-hosted,

      So yours and another comment I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server’s Intel A750 GPU with 8 GB of VRAM. Not exactly top-of-the-line, but not bad.

      I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn’t even thought to try, and it worked.

      But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.

      8b is the highest number of parameters I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?

        • ch00f@lemmy.world · ↑1 · 1 day ago

          Well, not off to a great start.

          To be clear, I think getting an LLM to run locally at all is super cool, but saying “go self-hosted” sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.

          • lexiw@lemmy.world · ↑1 · 1 day ago

            I agree, it is a very expensive hobby, and it only gets decent in the 30-80b range. However, the model you’re using shouldn’t perform that badly; it seems you might be hitting a config issue. Would you mind sharing the CLI command you use to run it?

            • ch00f@lemmy.world · ↑1 · 15 hours ago

              Thanks for taking the time.

              So I’m not using a CLI. I’ve got the intelanalytics/ipex-llm-inference-cpp-xpu image running and hosting LLMs for a separate open-webui container to use. I originally set it up with Deepseek-R1:latest per the tutorial to get the results above. This was straight out of the box with no tweaks.

              The interface offers some control settings (screenshot below). Is that what you’re talking about?

              • lexiw@lemmy.world · ↑1 · 14 hours ago

                Those values are most of what I was looking for. An LLM just predicts the next token (for simplicity, a word). It does so by assigning every candidate word a probability and then picking a random word from that list, weighted by its probability. For the sentence “a cat sat” it might generate “on: 0.6”, “down: 0.2”, and so on. 0.6 just means 60%, and all the values add up to 1 (100%).

                Now, the candidate list can be as big as the model’s whole vocabulary, so you might want to pick randomly from only the top 10; you control this with the top_k parameter. Or you might want to discard every word below 20% of the best word’s probability; you control this with min_p.

                Finally, when you have one token with a big probability followed by tokens with very low probabilities, you might want to squash the probabilities closer together, decreasing the higher ones and increasing the lower ones. You control this with the temperature parameter, where 0.1 is very little squashing and 1 is a lot. In layman’s terms this is the amount of creativity of your model: 0 is none, 1 is a lot, 2 is mentally insane.
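                To make that concrete, here is a toy sketch of those filters in plain Python (made-up numbers, and not any particular runner’s actual implementation; real runners work on logits, but the effect on the probabilities is the same):

```python
def sample_filter(probs, temperature=1.0, top_k=None, min_p=None):
    """Apply temperature, top_k, and min_p to a token->probability dict,
    then renormalize so the surviving probabilities sum to 1."""
    # Temperature: raising each probability to 1/T squashes (T > 1)
    # or sharpens (T < 1) the distribution.
    scaled = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(scaled.values())
    scaled = {t: p / total for t, p in scaled.items()}
    # top_k: keep only the k most likely tokens.
    if top_k is not None:
        kept = sorted(scaled, key=scaled.get, reverse=True)[:top_k]
        scaled = {t: scaled[t] for t in kept}
    # min_p: drop tokens below a fraction of the best token's probability.
    if min_p is not None:
        best = max(scaled.values())
        scaled = {t: p for t, p in scaled.items() if p >= min_p * best}
    # Renormalize whatever survived.
    total = sum(scaled.values())
    return {t: p / total for t, p in scaled.items()}

# "a cat sat" -> next-token candidates from the example above
probs = {"on": 0.6, "down": 0.2, "up": 0.15, "sideways": 0.05}
out = sample_filter(probs, top_k=2)
# keeps only "on" and "down", renormalized to roughly 0.75 / 0.25
```

                Sampling then just picks a token at random according to the filtered probabilities; with temperature near 0, the top token dominates and the output becomes nearly deterministic.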

                Now, without knowing your hardware or why you need Docker, it’s hard to suggest a tool for running LLMs. I’m not familiar with the image you’re using, but it doesn’t seem to be maintained, and it likely lacks the features a modern LLM needs to work properly. For consumer-grade hardware and personal use, the best tool these days is llama.cpp, usually through a newbie-friendly wrapper like LM Studio, which supports other backends as well and provides much more than just a UI to download and run models. My advice is to download that and start there (it will download the right backend for you, so there’s no need to install anything else manually).

                • ch00f@lemmy.world · ↑1 · 5 hours ago

                  I’ll give that a shot.

                  I’m running it in Docker because it’s on a headless server with a boatload of other services. Ideally, whatever I use will be accessible over the network.

                  I think at the time I started, not everything supported Intel cards, but it looks like llama-cli has support for Intel GPUs. I’ll give it a shot. Thanks!

        • ch00f@lemmy.world · ↑2 · 2 days ago

          Any suggestions on how to get these into GGUF format? I found a GitHub project that claims to convert them, but I’m wondering if there’s a more direct way.
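          For what it’s worth, llama.cpp itself ships a converter, which is about as direct as it gets. A sketch (script and binary names are from recent llama.cpp checkouts and the paths are placeholders; check the repo, since these occasionally get renamed):

```shell
# Convert a Hugging Face model directory to GGUF at f16 precision,
# then quantize it down to something an 8 GB card can hold.
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf --outtype f16
./llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

          Both commands assume you are inside a built llama.cpp checkout; many popular models also have ready-made GGUF uploads on Hugging Face, which skips this step entirely.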

      • Mika@piefed.ca · ↑2 · 2 days ago

        It comes down to the amount of VRAM / unified RAM you have. There’s no magic that makes an 8b model perform like the top-tier subscription LLMs (likely in the 500b+ range; I wouldn’t be surprised if it’s trillions).

        If you can get to 32b / 80b models, that’s where the magic starts to happen.
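        The back-of-the-envelope arithmetic behind that: quantized weights take roughly parameters × bits-per-weight / 8 bytes, plus some overhead for the KV cache and runtime buffers. A rough sketch (the 4.5 bits/weight and 1 GB overhead are ballpark assumptions, not measured values):

```python
def approx_vram_gb(params_billions, bits_per_weight=4.5, overhead_gb=1.0):
    """Very rough VRAM estimate for a quantized LLM: weight bytes plus a
    fixed allowance for KV cache and runtime buffers (both assumptions)."""
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb + overhead_gb

print(approx_vram_gb(8))   # → 5.5  (an 8b model just fits an 8 GB card)
print(approx_vram_gb(70))  # → 40.375  (a 70b model needs multiple GPUs)
```

        By the same math, a 500b-class model at 4.5 bits/weight would need close to 300 GB, which is why the subscription-grade models are out of reach locally.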

      • Sir. Haxalot@nord.pub · ↑2 ↓2 · 2 days ago

        Honestly, you pretty much don’t. LLMs are insanely expensive to run, and most of the model improvements come from simply growing the model. It’s not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you’re going to be behind the purpose-made GPUs with 80 GB of VRAM.

        Maybe it could work for some use cases, but I’d rather just not use AI.

    • CosmoNova@lemmy.world · ↑7 ↓4 · 2 days ago

      Going local is taxing on hardware that is extremely expensive to replace. Hell, it could soon become almost impossible to replace. I genuinely don’t recommend it.

      Even if you HAVE to use LLMs for some reason, there are free alternatives right now that let Silicon Valley bleed money, and they’re quickly running out of it.

      Cancelling any paid subscription probably hurts them more than anything else.

      • Mika@piefed.ca · ↑7 ↓2 · 2 days ago (edited)

        If an LLM is tied to making you productive, going local is about owning and controlling the means of production.

        You aren’t supposed to run it on the machine you work on anyway; run a server and send requests to it.
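        A minimal sketch of that client side, assuming the server speaks the OpenAI-compatible API that most local runners (llama.cpp’s llama-server, Ollama, LM Studio) expose; the host, port, and model name below are placeholders:

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat-completion payload for a local server."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(server_url, prompt):
    # POST the payload to the server's OpenAI-compatible endpoint and
    # pull the assistant's reply out of the response JSON.
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        server_url + "/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# e.g. ask("http://homeserver.local:8080", "Tone-check this email for me: ...")
```

        Because the API shape is the standard chat-completions one, the same client works unchanged whether the server is llama-server on a GPU box or Ollama on a spare laptop.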

      • Hubi@feddit.org · ↑3 ↓1 · 2 days ago

        It’s not really taxing on your hardware unless you load and unload huge models all day or if your cooling is insufficient.

    • ZILtoid1991@lemmy.world · ↑1 ↓3 · 2 days ago

      I would, if I found even a remotely good use case for LLMs. It would be useful for contextual search over a bunch of API documentation and books on algorithms, but I don’t want a sycophantic “copilot” or “assistant” that does a job so badly I’d be fired for it, all while being called ableist slurs and getting blacklisted from the industry.

      • unspeakable_horror@thelemmy.club · ↑1 · 14 hours ago

        I only have a need for RAG, and it works well; I wouldn’t say I need a copilot either. Everyone probably has some use case for this stuff, but it’s going to be different for everyone. The basic LLM conversation model is really just a stepped-up interactive Google.