- cross-posted to:
- [email protected]
I like my project manager: they find me work, ask how I’m doing, and talk straight.
It’s when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I nod at prompted intervals while my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.
COs are corporate politicians, media-trained to say only things that are completely unrevealing and lacking any substance.
This is by design so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.
I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you’re having on a personal project or what toy to buy for your cat’s birthday.
I just turn off my camera and turn on Forza Motorsport or something like that.
It’s ironic how conservative the spending actually is.
Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?
No.
Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.
Deepseek is what happens when a company is smart but resource constrained. An order of magnitude more efficient, and even their architecture was very conservative.
wait so the people doing the work don’t get paid and the people who get paid steal from others?
that is just so uncharacteristic of capitalism, what a surprise
It’s also cultish.
Everyone was trying to ape ChatGPT. Now they’re rushing to ape Deepseek R1, since that’s what is trending on social media.
It’s very late-stage capitalism, yes, but that doesn’t come close to painting the whole picture. There’s a lot of groupthink, an urgency to “catch up and ship” and look good quickly rather than focus on experimentation, sane applications, and such. When I think of shitty capitalism, I think of stagnant entities like shitty publishers, dysfunctional departments, consumer abuse, things like that.
This sector is trying to innovate and make something efficient, but it’s like the purse holders and researchers have horse blinders on. Like they are completely captured by social media hype and can’t see much past that.
I went to CES this year and sat in on a few AI panels. This is actually not far off. Some said, yeah, this is right, but multiple panels I went to said that this is a dead end, and while useful, they are starting down different paths.
It’s not bad, it’s just that we’re finding it’s not great.
They’re throwing billions upon billions into a technology with extremely limited use cases, a novelty at best. My god, even drones fared better in the long run.
Nah, generative AI is pretty remarkably useful for software development. I’ve written dozens of product updates with tools like claudecode and cursorai. Dismissing it as a novelty is reductive and straight up incorrect.
As someone starting a small business, it has helped tremendously. I use a lot of image generation.
If that didn’t exist, I’d either have to use crappy-looking clip art or pay a designer, which I literally can’t afford.
Now my projects actually look good. It makes my first projects look like a high schooler did them at the last minute.
There are many other uses, but I rely on it daily. My business can exist without it, but the quality of my product is significantly better and the cost to create it is much lower.
Your product is other people’s work thrown in a blender.
Congrats.
Wait til you realize that’s just what art literally is…
You’re confusing AI art with actual art, like rendered illustrations and paintings.
It’s as much “real” art as photography: making a relatively finite number of decisions and finding something that looks “good”.
Really good photography is actually pretty hard and the best photographers are in high demand.
It involves a ton of settings on the camera, and frequently post-processing to balance out anything that wasn’t perfect during the shoot. Plus there is a ton of blocking and lighting, and for portraits and other planned shoots there is a lot of directing involved in getting the subjects into the right positions, showing the right emotions, etc. Even shooting nature requires a massive amount of planning and work beyond a few camera settings.
Hell, even stock photos tend to be a lot of work to set up!
If you think that someone taking an in-focus photo with adequate lighting and posting it to Instagram is the same as professional photography, then you have no idea what is involved.
I mean it’s pretty clear they’re desperate to cut human workers out of the picture so they don’t have to pay employees that need things like emotional support, food, and sleep.
They want a workslave that never demands better conditions, that’s it. That’s the play. Period.
And the tragedy of the whole situation is that they can’t win, because if every worker is replaced by an algorithm or a robot, then who’s going to buy your products? Nobody has money because nobody has a job. And so the economy will shift to producing war machines that fight each other for territory to build more war machine factories, until you can’t expand anymore for one reason or another. Then the entire system will collapse like the Roman Empire, and we start from scratch.
Why would you need anyone to buy your products when you can just enjoy them yourself?
Because there’s always a bigger fish out there to get you. Or that’s what trillionaires will tell themselves when they wage a robotic war. This system isn’t made to last the way it’s progressing right now.
Optimizing AI performance by “scaling” is lazy and wasteful.
Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.
don’t worry about performance, GHz will always go up
TF2 devs lol
Thing is, same as with GHz, you have to push it as far as you can until the gains get too small. You do that, then you move on to the next optimization. Like AI has done, and is now doing with test-time compute, token quality, and other areas.
Removed by mod
Why didn’t you drop the quotes from Turing, Minsky, and Lovelace?
Removed by mod
The cope on this site is so bad sometimes. AI is already revolutionary.
That may be true technologically. But if the economics don’t add up it’s a bubble.
Even the open models released today you can run on your own can boost your productivity massively if you know what you’re doing. Most people here are just too daft to know what they’re doing and parrot whatever shite memes have told them to think.
It’s neither, and business majors shouldn’t have voting rights as non-sapient humans.
Ya about as revolutionary as my left nut
Removed by mod
Does your left nut give people 20:10 vision? Because AI already does. Can it detect cancer before a human can? Is it accelerating the fight against antibiotic resistance, protein synthesis, and the testing of new medications?
Yes. Believe it or not my left nut can do those things.
LLMs are good for learning, brainstorming, and mundane writing tasks.
deleted by creator
Analyzing text from a different point of view than your own. I call that “synthetic second opinion”
The actual survey result:
Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.
So they’re not saying the entire industry is a dead end, or even that the newest phase is. They’re just saying they don’t think this current technology will produce AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren’t betting this will turn into AGI; they’re betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.
This would be like asking a researcher in the ’90s whether, if we scaled up the bandwidth and computing power of the average internet user, we would see a vastly connected media-sharing network; they’d probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.
It’s becoming clear from the data that more error correction needs exponentially more data. I suspect that pretty soon we will realize that what’s been built is a glorified homework cheater and a better search engine.
what’s been built is a glorified homework cheater and an ~~better~~ unreliable search engine.
I used to support an IVA cluster. Now the only thing I use AI for is voice controls to set timers on my phone.
I use ChatGPT daily in my business. But I use it more as a guide than a real replacement.
Technology in most cases progresses on a logarithmic scale when innovation isn’t prioritized. We’ve basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and not even come close to what they say it is. These days we’re in the “bells and whistles” phase, where they add unnecessary bullshit to make it seem new, like adding five cameras to a phone or touchscreens to cars. Things that make a product seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything but the price.
I remember listening to a podcast that is about scientific explanations. The guy hosting it is very knowledgeable about this subject, does his research, and talks to experts when the subject involves something he isn’t himself an expert in.
There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you’re doing, and you need a theory of how it works, before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite the machine being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.
So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google’s paper showing that language models could be parallelized, leading to the creation of “larger language models”. That was Google doing science. But you can’t control when some new breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.
This also shows why the current neglect of basic/general research without a profit goal is holding back innovation.
I think the first LLM that introduces a good personality will be the winner. I don’t care if the AI seems deranged and seems to hate all humans; to me, that’s more approachable than a boring AI that constantly insists it’s right and ends the conversation.
I want an AI that argues with me and calls me a useless bag of meat when I disagree with it. Basically I want a personality.
I’m not AI, but I’d like to say that to you at no cost at all, you useless bag of meat.
Good let them waste all their money