IMO, the typical approach of using fact-checking services to rate the accuracy of sources is inevitably flawed: if a source (or a fact checker) builds a reputation for reliability, it will eventually be suppressed or subverted into exploiting its reputation for other purposes.
A better option might be to treat all sources as potentially informative, but not at face value: rather, build a predictive model of each source, and treat as significant only those stories that deviate from prediction (i.e., stories that seem atypical for that source). Those are the stories most likely to convey information the source didn’t generate itself.
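For concreteness, here is a toy sketch of that idea in Python. The per-source word-frequency model, the surprisal score, and the threshold are all illustrative stand-ins rather than a worked-out system; a real filter would need a much stronger model of each source.

```python
# Toy sketch: keep a word-frequency profile per source and flag stories whose
# average surprisal under that profile is unusually high, i.e. stories that
# don't look like what the source normally publishes.
import math
from collections import Counter, defaultdict


class SourceProfile:
    """Running word-frequency model of everything one source has published."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def update(self, text):
        words = text.lower().split()
        self.counts.update(words)
        self.total += len(words)

    def surprisal(self, text):
        """Average negative log-probability of the story's words under this source's history."""
        words = text.lower().split()
        if not words or self.total == 0:
            return float("inf")
        vocab = len(self.counts) + 1
        total_cost = 0.0
        for w in words:
            # Laplace smoothing so words the source has never used stay finite.
            p = (self.counts[w] + 1) / (self.total + vocab)
            total_cost -= math.log(p)
        return total_cost / len(words)


profiles = defaultdict(SourceProfile)


def is_atypical(source, story, threshold=8.0):
    """True if the story deviates from this source's usual output (threshold is arbitrary here)."""
    profile = profiles[source]
    flag = profile.surprisal(story) > threshold
    profile.update(story)  # keep learning the source's baseline either way
    return flag
```

Every incoming story would be run through is_atypical; only the flagged ones get surfaced for closer reading, since those are the ones least likely to be the source's own boilerplate.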
That’s certainly a good point, but I’m less concerned with how to verify information than with how to counteract the constant flow of misinformation, especially on other platforms where it is deliberately pushed; it’s causing major problems in my home country alone.
How are you going to counter misinformation if you can’t determine what is and isn’t misinformation?
What makes you think I couldn’t tell the difference?
The fact that you said you’re concerned with verifying information.
What I meant was that my question wasn’t about how to distinguish between reputable and unreliable sources – I think most Lemmy users are capable of doing that.
I was more interested in how we can effectively and meaningfully contribute to countering the flood of misinformation on social media (such as Twitter or Meta’s apps).
The background to my question is that this misinformation influences users’ opinions. I think the US is the best example of where that can lead. Unfortunately, there are similar trends in my home country. Since I don’t want to be ruled by fascists, I thought I’d ask the community here what can be done.
But apparently I didn’t phrase the question very well.
Well that makes one of us. My experience is that most Lemmy users think Wikipedia was written by God himself.