

Deleted / Wrong thread.
Freedom is the right to tell people what they do not want to hear.
Deleted / Wrong thread.
It was supposed to be a joke. They live around 2 to 4 years.
I’ve had the same gerbil for almost 30 years. I doubt I’d notice if someone swapped it for an identically colored one in the middle of the night.
Judging by the comments here, I’m getting the impression that people would rather provide a selfie or ID.
None, other than that it’s geographically closer to my actual location, so I thought the speed would be faster.
The EU is about to do the exact same thing. Norway is the place to be. That’s where I went - at least according to my IP address.
FUD has nothing to do with what this is about.
And nothing of value was lost.
Sure, if privacy is worth nothing to you, but I wouldn’t speak for the rest of the UK and EU.
My feed right now.
No disagreement there. While Trump himself may or may not be guilty of any wrongdoing in this particular case, he sure acts like someone who is. And if he’s not protecting himself, then he’s protecting other powerful people around him who may have dirt on him, which they can use as leverage to stop him from throwing them under the bus without taking himself down in the process.
But that’s a bit beside the point. My original argument was about refraining from accusing him of being a child rapist on insufficient evidence, no matter how much it might serve someone’s political agenda or how satisfying it might feel to finally see him face consequences. If there’s undeniable proof that he is guilty of what he’s being accused of here, then by all means he should be prosecuted. But I’m advocating for due process. These are extremely serious accusations that should not be spread as facts when there’s no way to know - no matter who we’re talking about.
It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.
It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.
I don’t think you even know what you’re talking about.
You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they’re disagreeing when they’re not. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.
The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.
And for the record, the term is Artificial General Intelligence (AGI), not GAI.
Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”
LLMs are intelligent - just not in the way people think.
Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.
There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.
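The “patterns and probabilities” point can be made concrete with a toy sketch. This is not how any real model works internally - the vocabulary and probabilities below are made up, and actual LLMs condition on long contexts using billions of learned weights rather than a hand-written table - but it shows the core idea: text is produced one token at a time by sampling from a probability distribution over possible next tokens.

```python
import random

# Toy next-token table (hypothetical values, for illustration only):
# for each token, the probability of each token that may follow it.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "weather": {"changed": 1.0},
    "sat": {}, "slept": {}, "barked": {}, "changed": {},
}

def generate(start, max_tokens=5, seed=0):
    """Generate text by repeatedly sampling the next token."""
    random.seed(seed)
    tokens = [start]
    while len(tokens) < max_tokens:
        choices = NEXT_TOKEN_PROBS.get(tokens[-1], {})
        if not choices:  # no continuation known; stop
            break
        words = list(choices)
        weights = [choices[w] for w in words]
        # Sample one next token, weighted by its probability.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))
```

Nothing in this loop checks whether the sentence is *true* - it only checks what is *likely to come next*, which is exactly why fluency and factual accuracy are separate properties.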
Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.
Trust what? I’m simply pointing out that we don’t know whether he’s actually done anything illegal or not. A lot of people seem convinced that he did - which they couldn’t possibly be certain of - or they’re hoping he did, which is a pretty awful thing to hope for when you actually stop and think about the implications. And then there are those who don’t even care whether he did anything or not, they just want him convicted anyway - which is equally insane.
Also, being “on the list” is not the same thing as being a child rapist. We don’t even know what this list really is or why certain people are on it. Anyone connected to Epstein in any capacity would dread having that list released, regardless of the reason they’re on it, because the result would be total destruction of their reputation.
You’re just making my point for me.
And I’m not talking about “being on a list” but about being a child rapist. Two wildly different things.
if she comes out and says that the president was… not involved in any criminal activity.
Which, let’s not forget, may very well be true. Wishing it weren’t true is wishing that a child was raped, just because the consequences would further one’s political agenda.
He didn’t shoot himself in the head, so the brain was preserved. Reminds me of the “Texas Tower Shooter,” Charles Whitman.
I’ve heard a neuroscientist talk about this and conclude that the tumor could very well have been the cause of his behavior.