

Yeah, that’s the thing.
The gaming market only barely exists at this point. That’s why Nvidia can ignore the gaming market for as long as they want to.
Peasants (sorry, gamers) buy cheap inference cards (sorry, gaming cards).
The absolute majority of Nvidia’s sales globally are top-of-the-line AI SKUs. Gaming cards are just a way of letting data scientists and developers have cheap CUDA hardware at home (while allowing some Cyberpunk), so they keep buying NVL clusters at work.
Nvidia’s networking division is probably a greater revenue stream than gaming GPUs.
the H200 has a very impressive bandwidth of 4.89 TB/s, but for the same price you can get 37 TB/s spread across 58 RX 9070s. Whether this actually works in practice, I don’t know.
Your math checks out, but only for some workloads. Other workloads scale out like shit, and then you want all your bandwidth concentrated. At some point you’ll also want to consider power draw:
Now include power and cooling over a few years and do the same calculations.
As for apples and oranges, this is why you can’t look at the marketing numbers, you need to benchmark your workload yourself.
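For a back-of-the-envelope version of both points (all prices, wattages and the electricity rate below are my own assumptions, not quotes):

    # rough TCO sketch: aggregate bandwidth vs capex vs power over 3 years
    # every number here is an assumption, plug in your own quotes
    KWH_PRICE = 0.15                     # USD per kWh, assumed
    HOURS = 3 * 365 * 24                 # 3 years of 24/7

    setups = {
        "1x H200":     dict(units=1,  bw_tbps=4.89, price=30_000, watts=700),
        "58x RX 9070": dict(units=58, bw_tbps=0.64, price=550,    watts=220),
    }

    for name, s in setups.items():
        bw = s["units"] * s["bw_tbps"]
        capex = s["units"] * s["price"]
        power_cost = s["units"] * s["watts"] / 1000 * HOURS * KWH_PRICE
        print(f"{name}: {bw:.1f} TB/s aggregate, ${capex:,} capex, "
              f"${power_cost:,.0f} in electricity (cooling not included)")

The aggregate bandwidth is real, but so is the roughly 18x power draw, and none of this says anything about whether your workload can actually be spread over 58 cards.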
Well, a few issues:
For fun, home use, research or small time hacking? Sure, buy all the gaming cards you can. If you actually need support and have a commercial use case? Pony up. Either way, benchmark your workload, don’t look at marketing numbers.
Is it a scam? Of course, but you can’t avoid it.
Your numbers are old. If you are building today with anyone so much as mentioning AI, you might as well consider 100kW/rack as ”normal”. An off-the-shelf CPU today runs at 500W, and you usually have two of them per server, along with memory, storage and networking. With old school 1U pizza boxes, that’s basically 100kW/rack. If you start adding GPUs, just double or quadruple power density right off the bat. Of course, assume everything is direct liquid cooled.
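Back-of-the-envelope (the non-CPU wattage per box is my guess):

    # assumed numbers for a dense rack of dual-socket 1U pizza boxes
    CPU_W = 500              # per socket, current high-end SKU
    SOCKETS = 2
    OTHER_W = 1_400          # memory, storage, NICs, fans, PSU losses: a guess
    SERVERS = 42             # one 1U server per U in a standard rack

    per_server = SOCKETS * CPU_W + OTHER_W
    print(f"{per_server} W/server, {per_server * SERVERS / 1000:.0f} kW/rack")
    # -> 2400 W/server, 101 kW/rack, before you add a single GPU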
I’ll just go ahead and start the flame war.
I totally agree with the functionality of systemd. We need that. But the implementation… Why the fuck do we need to cram everything into pid 1? At least delegate the parsing to another process, god damn. And could we all just agree that ’systemd-{networkd,resolved,homed}’ don’t really have a reason to exist, and definitely not that tightly coupled to a fucking init system. Systemd-timers are wonderful, but why are we running cron-but-better in pid 1?
We have an init system where the developers are afraid of using things like processes and separation of privileges. I’m just tired of patching fleets of servers in panic every time Poettering’s bad design decisions hit the fan with their CVEs and consequences.
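To be clear, the timers themselves are great. A minimal cron replacement looks something like this (unit names hypothetical):

    # backup.timer
    [Unit]
    Description=Nightly backup trigger

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

    # backup.service
    [Unit]
    Description=Nightly backup

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

Enable with ’systemctl enable --now backup.timer’, check schedules with ’systemctl list-timers’, and output lands in the journal. None of which explains why the scheduling logic has to live in pid 1.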
I also hate that warning, but it’s basically ”Can’t fit your text, with the font and properties you specified, into the box you specified without making it look like ass”
Easiest way to preserve formatting is to reword the text. Then again, it would be nice if it didn’t happen all the time in my normal paragraphs as soon as I use a word with more than 10 characters…
Reword your text to fit.
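If rewording isn’t an option, these are the usual knobs (standard LaTeX; microtype is a stock package):

    % the usual knobs for taming Overfull \hbox warnings
    \usepackage{microtype}    % subtly better justification, often enough alone
    \emergencystretch=1em     % let TeX stretch inter-word space before overflowing
    % per-paragraph last resort: {\sloppy ...} trades the warning for loose spacing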
The thing is, Wayland does kind of prevent it by forcing the GPU into the rendering pipeline far harder than Xorg. The GPU assumptions throughout the code base(s) make latency shoot through the roof when running software rendered. If you want decent latency, you need a GPU, and if you want to run multiuser you are going to pay Nvidia a shitton of money.
I can also imagine it’s hard (impossible?) to do performant damage tracking in a VNC server without implementing at least parts of the VNC server inside the compositor. This means that the compositor and VNC server get tightly coupled by necessity. Choice will be limited. Would you like the bad DE with the good VNC server, or the good DE with the bad VNC server? Bad damage tracking means shit latency and high bandwidth usage, or other tradeoffs. So even if someone managed to implement what I want on Wayland, it would most likely be limited to a single compositor and not a general solution allowing a free choice of compositor.
Best software suite I know of for it is Cendio ThinLinc, on top of TigerVNC. Free for up to 5 users. There are some others in the same niche. My recommendation would be to try ThinLinc on Rocky 9 or Ubuntu 24, and configure it to use XFCE. MATE, KDE, or Cinnamon all work fine too. Turn off compositing! Over a good WAN-link it feels mostly local unless playing fullscreen videos. On a LAN-link, the only thing giving it away is extra tearing and compression artifacts when playing YouTube videos fullscreen. Compared to many other solutions I have tried, the latency and ”immersion” are incredible.
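For the compositing bit on XFCE, it’s a single xfconf knob (assuming xfwm4 is your window manager):

    xfconf-query -c xfwm4 -p /general/use_compositing -s false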
As for me, I’ll try to never manage linux desktop fleets or remote desktops again.
What I’ve seen of rustdesk so far is that it’s absolutely not even close to the options available for X. It replaces TeamViewer, not thin clients.
You would need the following to get viability in my eyes:
This isn’t even an edge case. Current and upcoming regulations on information security drag the entire industry this way. Medical, research, defence, banking, basically every regulated landscape gets easier to work in when going down this route. Close to zero worries about endpoint security. Microsoft is working hard on this. It’s easy to do with X. And the best thing on Wayland is RustDesk? As stated earlier, these issues were brought up and discarded as FUD in 2008, and here we are.
Wayland isn’t a better replacement, after 15 years it’s still not a replacement. The Wayland implementations certainly haven’t been rushed, but the architecture was. At this point, fucking Arcan will be viable before Wayland.
Exactly my point. The issues people consider ”solved” with wayland today will be solved in production in 3-5 years.
People are still running RHEL 7, and Wayland in RHEL 9 isn’t that polished. In 4-5 years when RHEL 10 lands, it might start to be usable. Oh right, then we need another few years for vendors to port garbage software that’s absolutely mission critical, barely works on Xorg, and sure as fuck won’t work in XWayland. I’m betting several large RHEL-clients will either remain on RHEL 8 far past EOL or just switch to alternative distros.
Basically, Xorg might be dead, but in some (paying commercial) contexts, Wayland won’t be a viable option within the next 5-10 years.
Yeah, the few thousand users I managed desktops for will remain on X for the next few years, last I heard from my old colleagues.
Because of my points above
But good that your laptop works now and that I can help my grandma over TeamViewer again.
Please note that the nominal FLOP/s from both Nvidia and Huawei are kinda bullshit. What precision we run at greatly affects that number. Nvidia’s marketing nowadays refers to fp4 tensor operations. Traditionally, FLOP/s are measured with fp64 matrix-matrix multiplication. That’s a lot more bits per FLOP.
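A naive illustration of how much the datatype flatters the headline number (pure bit-width scaling; real silicon ratios are worse, especially for fp64):

    # made-up headline number, scaled by bit width only
    marketing_fp4_pflops = 20
    bits = {"fp64": 64, "fp32": 32, "fp16": 16, "fp8": 8, "fp4": 4}

    for prec, width in bits.items():
        est = marketing_fp4_pflops * bits["fp4"] / width
        print(f"{prec}: ~{est:.2f} PFLOP/s under naive bit-scaling")
    # fp64 comes out at ~1.25 PFLOP/s, a 16x haircut before you even
    # account for fp64 units being scarce on inference-oriented silicon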
Also, that GPU-GPU bandwidth is kinda shit compared to Nvidia’s marketing numbers if I’m parsing correctly (NVLink on the H100 is 18 links at 50GB/s each, big ’B’ in GB). I might read the numbers incorrectly, but anyway. How and if they manage multi-GPU cache coherency will be interesting to see. Nvidia and AMD both do (to varying degrees) have cache coherency in those settings. Developer experience matters…
Now, the real interesting thing is power draw, density and price. Power draw and price obviously influence TCO. On 7nm, I guess the power bill won’t be very fun to read, but that’s just a guess. The density influences network options - are DAC-cables viable at all, or is it (more expensive) optical all the way?
There is actually less to ’xkill’ than people think. It nukes the X window from orbit in a very violent manner by severing the client’s connection to the X server; the owning process(-tree) will usually just instantly curl up and die as a result.
The main benefit is that it doesn’t actually kill the process itself, only the window. As such, you can get rid of windows belonging to otherwise unkillable processes (zombies, etc).
Also, it’s fun. Just don’t miss the window and accidentally kill your WM. (Beat that Wayland)
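Usage, for the uninitiated (the window id is an example):

    xkill                  # cursor becomes a crosshair, click the victim window
    xkill -id 0x3c00041    # or target a specific window id from 'xwininfo'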
Now consider that most enterprises are about five years behind that. Takes a few years before what’s available in Fedora trickles down to RHEL, and a few more years before it’s rolled out to clients. Ubuntu is on a similar timeline.
The fixes you got two years ago might be rolled out three years from now in these places. Oh, and these are the people forking over much of the money for the Wayland development efforts. The current state of Wayland if you pay for it is kinda meh.
I’ll bite. It’s getting better, but still a long way to go.
But what do I know, I’ve only deployed and managed desktop linux for a few thousand people. People were screaming about these design flaws back in 2008 when this all started. The criticisms above were known and dismissed as FUD, and here we are. A few architectural changes back then, and we could have done this migration a decade faster. Just imagine, screen sharing during the pandemic!
As an example, see Arcan, a small research project with an impressively large subset of features from both X11 and Wayland (including working screen sharing, network transparency and a functioning security model). I wouldn’t use it in production, but if it was more than one guy in a basement working on it, it would probably be very usable fairly fast, compared to the decade and a half that RedHat and friends have poured into Wayland thus far. Using a good architecture from the start would have done wonders. And Wayland isn’t even close to a good architecture. It’s just what we have to work with now.
Hopefully Xorg can die at some point, a decade or so from now. I’m just glad I don’t work with desktops anymore, the swap to Wayland will be painful for a lot of organisations.
Rough start? It’s been over a decade and it’s still rough.
You have FreeIPA if you want a ”product”.
But honestly, if I, as a Linux admin, would do this kind of thing at this scale, I’d probably elect to remain on AD.
Here be dragons. But basically:
Run a VM from the contents of a physical disk: use ’dd’ to create a disk image. If on Linux, try to boot it and fix all the errors, hopefully few. (Rough sketch below.)
Run VM as physical machine: other way around.
You won’t find this in a tutorial. You need to understand concepts, read manuals, fit everything together, execute, fail and retry until it works.
For Windows, I have no idea. Conceptually, I figure it’s similar.
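A rough sketch of the Linux path (device names are examples, double-check before you dd anything):

    # physical -> VM: image the whole disk
    dd if=/dev/sdX of=machine.img bs=4M conv=sync,noerror status=progress

    # smoke-test the image in a VM, then fix whatever breaks on boot
    qemu-system-x86_64 -m 4096 -drive file=machine.img,format=raw

    # VM -> physical: same trick in reverse
    dd if=machine.img of=/dev/sdX bs=4M status=progress
    # then expect to fix fstab/UUIDs, initramfs and the bootloader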
Most arch users are casuals that finally figured out how to read a manual. Then you have the 1% of arch users who are writing the manual…
It’s the Gentoo and BSD users we should fear and respect, walking quietly with a big stick of competence.