

You want to do the opposite of what your username suggests?
Wishing for my death or a World War. Either will do. Because FML or this world.
Typical CEO, thinking the number of lines of code is the same as productivity. What was the functionality of those 250k lines? Arithmetic ops between two ints? Computing whether an int is even?
That just goes to show how fragile power is.
Seriously though, I really hate that managers hate employees leaving early. Just how controlling do you want to be? Employees are not kids.
I didn’t ask to have BPD. I didn’t ask for the inability to regulate my emotions. I can only be me. I definitely cannot pretend to be relaxed or fun; that’s just not me. The relaxed side of me comes out slowly.
I am not asking for every woman to date me or even go out with me. All I am asking for is a bit of empathy. But if that’s too much, well. I cannot change anyone’s mind.
p.s., I wish I were not born. But, that is out of my hands too.
The best thing about being a human is that you can learn anything you want, to accomplish what you need to. Want to create an app, a framework, but don’t know how to code? Guess what, you can learn how to code. Want to write a story or an essay? You can learn how to write. Learning to satiate my curiosity about something; learning something so that I can accomplish something are the best things about my life. That is how I learnt programming. I don’t want anything to replace that for me, especially not some shit-generating LLM.
They left out “for us” at the end.
Naah. I live with my parents because I am pathetic that way. Also, I don’t really go to concerts. My OG comment was just a joke at the expense of my circumstance.
Me, unemployed and single: I have no such weakness.
The art of LLMs completely enshittifying the internet. I am glad that I cut myself off from most of these ‘social’ media.
That’s a nice lion. Let her sleep.
This is the reason we need “Stop Killing Games” to succeed.
Your big words scare anti-vaxxers.
I can assure you LLMs are, in general, not great at extracting concepts and working with them the way a human mind does. LLMs are statistical parrots that have learned to associate queries with certain output patterns, like code chunks or text chunks. They are not really intelligent, certainly not like a human is, and because of this they cannot follow instructions the way a human does. The problem is that they seem just intelligent enough to fool someone who wants to believe they are intelligent, even though there is no intelligence, by any measure, behind their replies.