

Gives me vibes of one of the only portrayals of Texas State law officials I’ve seen - the sheriff from Dukes of Hazzard




Fr, poetry doesn’t have to rhyme or even have the same number of syllables - bro probably wasn’t even a music student


The population seems to have increased and become more diverse, and there are always new communities being created. It’s not yet recognised as a desirable platform for businesses or influencers, so upvotes aren’t treated anywhere near as divine, but you still see some users with remnants of Reddit: massive psychological damage if they’re downvoted. Makes sense, I suppose; people generally use social media to feel validated about their opinions. Compared with Reddit, Lemmy has no monetised awards or the like, bots are mostly rudimentary and live in a couple of communities, and there’s little toxicity, harassment etc., because users have complete control over blocking anything and instance admins have complete control over banning and defederating. I think being able to close some doors is preferable to being wide open to all, and I don’t think it causes any “echo chambers”.
Overall, definite improvement over the years.
I use Nextcloud for informal shares, as its GUI is very similar to Microsoft’s and Google’s Drive offerings and is easily adopted. I also host a private pastebin instance for code or guides I think may be helpful, and Matrix for personal stuff. But I do like how Bitwarden/Vaultwarden’s Send works – it feels more secure, like WeTransfer. It still has its applications. And Vaultwarden’s file share is free, its size limit is adjustable in the server config, and it isn’t limited to what the Bitwarden clients claim!
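Since the adjustable size limit comes up, here’s roughly how I’d raise it in a docker-compose deployment. A hedged sketch only – the env var names are from memory, so verify them against the .env.template shipped with your Vaultwarden version:

```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    environment:
      # Per-user Send storage limit in KB (~500 MB here); variable name assumed from memory
      - USER_SEND_LIMIT=512000
      # Per-user attachment quota in KB; same caveat
      - USER_ATTACHMENT_LIMIT=512000
    volumes:
      - ./vw-data:/data
    ports:
      - "8080:80"
```

The clients may still display their own advertised cap, but in my experience the server-side limit is what actually governs the upload.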


It was a huge pain and I ended up troubleshooting with Gemini for hours aha! I know, I’ll plant a tree to offset my sins. It was at least useful for rapidly searching solutions and telling me which component was the most likely culprit.
I had coturn set up for legacy Element Classic and, before that, XMPP, but as I wasn’t using those I decided to shut it down and try LiveKit’s internal TURN server for Matrix instead. I’m not sure what actually fixed it in the end, but the latest LiveKit build had a bug, so I pulled v1.9.12 instead. I also shuffled around my reverse proxy config (left over from my old attempts) because some endpoints seemed to have changed. I’ll update later with an anonymised config :3
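In the meantime, here’s a hedged sketch of the discovery side as I understand it from MSC4143 (hostnames are placeholders, and field names are from memory, so double-check against the MSC): the homeserver’s .well-known/matrix/client advertises the LiveKit JWT service as an RTC focus, and the clients take it from there.

```json
{
  "m.homeserver": {
    "base_url": "https://matrix.example.com"
  },
  "org.matrix.msc4143.rtc_foci": [
    {
      "type": "livekit",
      "livekit_service_url": "https://livekit-jwt.example.com"
    }
  ]
}
```

Pinning the image to livekit/livekit-server:v1.9.12 instead of latest was the other half of it for me.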


Hey, just coming back to see how your setup’s going, and to say I’ve finally managed to get Element Call working for Matrix – I can help you get it running if you like!
I set up a simple sync service for my family with FolderSync (similar to Syncthing) on Android, which preserves their mobile files on a server-hosted SMB share. I haven’t even looked at storage encryption, though. You shouldn’t underestimate a simple yet effective solution; sometimes it’s so simple it flies under the radar.


I like that. The machine I use to host ytdl-sub is called ourtube.
Mad props to the dev for a GUI


It is indeed rather complex.


I think you got a downvote for promoting mailcow; users can be fickle


This is good. I use Mailcow Dockerized and it uses 10% of one 3.7GHz core but 2GB of RAM, so Stalwart definitely seems better for low-memory hosts. Mailcow seems to run one instance of rspamd for each mailbox; those and ofelia are the biggest RAM users according to top.


Can confirm what another user said: an Intel iGPU would be better in your case.
I’ll tell you now – if it runs Windows, kill it. My server originally ran Windows with Docker Desktop. It hosted three services: a Minecraft server, which lagged like a bitch; a Samba folder share; and Emby. Whenever Emby playback froze, I knew Windows, whose antivirus kept the HDD under constant load, had pegged the i3 6100 at 100%, which happened at least twice a day.
Moving on: now I run Proxmox. I host 25 services with the CPU at ~35% when idle and 24GB of RAM at 75%. Nothing lags.
Before I plugged in the GPU, my server drew 25W consistently, rising to 35W under load. With the GPU, a used RTX 3060 12GB, it idles at 85W, so make sure it’s worth it. In my case it not only transcodes for Emby and resumes streaming in a second, but also handles voice inference for Home Assistant in under a second, plus mid-sized Ollama LLM responses. I’d recommend a high-VRAM Nvidia card (for CUDA) in that scenario; my model, Gemma3 7B, uses 6GB of VRAM and 2GB of RAM. But a top model, say Dolphin-Mixtral 22B, needs 80GB of storage, 17GB of RAM and… well, I don’t have the RAM, but you get it. LLMs are intensive.
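As a rough rule of thumb for sizing a card (my own back-of-envelope, not from any docs): weight memory is roughly parameters × bits-per-weight ÷ 8, plus a GB or two of KV-cache and runtime overhead. A quick sketch:

```python
def model_vram_gb(params_billion, bits_per_weight=4, overhead_gb=1.5):
    """Rough VRAM estimate for a quantised LLM: weights plus runtime overhead.

    params_billion: parameter count in billions (e.g. 7 for a 7B model)
    bits_per_weight: 4 for common 4-bit quants, 16 for full fp16
    overhead_gb: KV cache / CUDA context fudge factor (a guess, not measured)
    """
    # 1e9 params * (bits / 8) bytes per param / 1e9 bytes per GB
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

# A 7B model at 4-bit lands around 5 GB by this estimate,
# in the same ballpark as the ~6 GB I actually see.
print(round(model_vram_gb(7), 1))
```

The same model at fp16 comes out around 15.5 GB, which is why quantised builds are the only way these fit on consumer cards.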


Cloudflare only just started redirecting to the Internet Archive when a site they protect goes offline, which I feel is a 200IQ feature.


Here you are :) it’s a GitHub link (I’m looking into hosting a private pastebin like PrivateBin and will replace this later)


My install must have been broken then 😭 and my experience is from around early 2025, and I didn’t keep it around, so my intel is also dated…


Ooh they upgraded? Yeah my information is based on early 2025 when I tried it aha


Oh, and if you wish: I’ve written installation guides for both Prosody and Continuwuity, based on Proxmox containers. They’re a bit old now but no doubt still useful.


Bluesky that bad huh? I guess it really is just a fancy Twitter clone