• 0 Posts
  • 30 Comments
Joined 1 month ago
Cake day: February 19th, 2026


  • Population seems to have increased and become more diverse, and there are always new communities being created. It’s not yet recognised as a desirable platform for businesses or influencers, so upvotes aren’t treated anywhere near as divine, but you still see some users with remnants of Reddit: massive psychological damage if they’re downvoted. Makes sense, I suppose; people generally use social media to feel validated about their opinions. Compared with Reddit, Lemmy has no monetised awards or the like, bots are mostly rudimentary and live on a couple of communities, and there’s little toxicity or harassment, because the user has complete control over blocking anything and instance admins have complete control over banning and defederating. I think being able to close some doors is preferable to being wide open to all, and I don’t think it causes any “echo chambers”.

    Overall, definite improvement over the years.


  • I use Nextcloud for informal shares as its GUI is very similar to Microsoft’s OneDrive or Google Drive, so it’s easily adopted. I also host a private pastebin instance for code or guides I think may be helpful, and Matrix for personal stuff. But I do like how Bitwarden/Vaultwarden’s Send works – it feels more secure, like WeTransfer – so it still has its applications. And Vaultwarden’s file Send is free, the size limit is adjustable in the server config, and it isn’t limited to what the Bitwarden clients claim!
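
    On the “adjustable in server config” point: Vaultwarden takes its settings as environment variables. A minimal sketch of a Docker deployment – the variable names and values here are assumptions to verify against the Vaultwarden wiki for your version (limits are specified in kilobytes):

```shell
# Hedged sketch of a Vaultwarden container with sharing enabled and a
# per-user attachment limit. Verify variable names against the Vaultwarden
# wiki before relying on them; paths and ports are illustrative.
docker run -d --name vaultwarden \
  -e SENDS_ALLOWED=true \
  -e USER_ATTACHMENT_LIMIT=512000 \
  -v /srv/vaultwarden:/data \
  -p 8080:80 \
  vaultwarden/server:latest
```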


  • It was a huge pain and I ended up troubleshooting with Gemini for hours aha! I know, I’ll plant a tree to offset my sins. It was at least useful for rapidly searching for solutions and telling me which component was the most likely culprit.

    I had coturn set up for legacy Element Classic and, before that, XMPP, but as I wasn’t using those I decided to shut it down and try Matrix LiveKit’s internal TURN server. I’m not sure what actually helped in the end, but LiveKit’s latest build had a bug, so I pinned v1.9.12 instead. I also shuffled around my reverse proxy config (from my old attempts) because some endpoints seemed to have changed. I’ll update later with anonymised config :3
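
    The version pin itself is just referencing the release tag explicitly instead of tracking :latest. A sketch assuming the official livekit/livekit-server image – the config path and flags are illustrative, so adjust to your own deployment:

```shell
# Pin LiveKit to a known-good release rather than :latest.
docker pull livekit/livekit-server:v1.9.12
docker run -d --name livekit \
  --network host \
  -v /srv/livekit/config.yaml:/etc/livekit.yaml \
  livekit/livekit-server:v1.9.12 --config /etc/livekit.yaml
```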


  • Can confirm what another user said: an Intel iGPU would be better in your case.

    I’ll tell you now – if it runs Windows, kill it. My server originally ran Windows with Docker Desktop, hosting three services: a Minecraft server, which lagged like a bitch; a Samba folder share; and Emby. Whenever Emby playback froze I knew Windows – whose antivirus kept the HDD under constant load – had pushed the i3-6100 to 100%, which happened at least twice a day.

    Moving on, I now run Proxmox, hosting 25 services with the CPU at ~35% and the 24GB of RAM at ~75% usage. Nothing lags.

    Before I plugged in the GPU my server drew 25W consistently, rising to 35W under load. With the GPU, an RTX 3060 11GB (used), it draws 85W idle, so make sure it’s worth it. In my case it not only transcodes for Emby and resumes streaming in a second, but also handles voice inference for Home Assistant in under a second, plus mid-sized Ollama LLM responses. I’d recommend a high-VRAM Nvidia card (for CUDA) in that scenario, as my model, Gemma3 7B, uses 6GB of VRAM and 2GB of RAM. But a top model, say Dolphin-Mixtral 22B, needs 80GB of storage and 17GB of RAM and… well, I don’t have the RAM, but you get it. LLMs are intensive.
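
    As a rough rule of thumb behind those numbers: a quantised model’s weights take roughly parameter count × bits-per-weight ÷ 8 bytes, plus some overhead for the KV cache and runtime. A hedged back-of-the-envelope sketch – the overhead factor is an assumed fudge factor, not a measurement:

```python
def model_memory_gb(params_billion: float, bits_per_weight: int = 4,
                    overhead: float = 1.2) -> float:
    """Rough memory estimate for a quantised LLM.

    Weights take params * bits / 8 bytes; `overhead` loosely covers
    the KV cache and runtime buffers (an assumed factor, not measured).
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantisation lands around 4 GB, and a 22B model
# around 13 GB -- the same ballpark as the figures above.
print(round(model_memory_gb(7), 1))
print(round(model_memory_gb(22), 1))
```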