The image shows the last state of a terminal emulator of a person without command line or git knowledge. The person attempted to run `git commit` and is now blaming the result of a specific configuration on their system, one that launches a vi derivative, on the vi derivative itself. This image is expected to convince the viewer that the vi derivative is to blame.
Simon 𐕣he 🪨 Johnson
they/them
Lord, where are you going?
Now I’m super curious about Gentoo and Portage. You don’t hear so much about compiling your own stuff anymore (probably because there are fewer architectures around).
“Nobody” runs Gentoo anymore because most distros have taken the 80% of optimizations you can do and just mainlined them. This was back in the 2000s, when some distros weren’t even compiling with `-O2` by default. Gentoo usage just proved out that the underlying code was effectively `-O3` safe in the 80% case and nobody was sneakily relying on C/C++ vagaries.

I have much less time to tinker, but my favorite new bag is Fedora Atomic (currently using Bazzite on my main desktop). I’m incredibly interested in figuring out Nix though, but I haven’t had the time. Immutable distros are honestly something incredibly useful for both power users and normies. The main issues I’ve had with Fedora Atomic have really been around vagueness in the “standard”, but they’re still figuring things out as far as I can tell.
The flag `-O3` exists. Or just `-funroll-loops`. You shouldn’t even need `-funroll-all-loops` in this case, since hashes have a fixed size.
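As a minimal sketch of what those flags act on (a made-up round function, not real hash code): a loop whose trip count is a compile-time constant is exactly what `-funroll-loops` can unroll completely.

```c
#include <stdio.h>

#define ROUNDS 16   /* fixed at compile time, like a hash's round count */

int main(void) {
    unsigned state = 0x12345678u;

    /* Constant trip count: gcc can unroll this fully, e.g.
     *   gcc -O2 -funroll-loops sketch.c */
    for (unsigned i = 0; i < ROUNDS; i++) {
        state = (state << 5) | (state >> 27);   /* toy rotate-left by 5 */
        state ^= i;                             /* toy mixing step */
    }

    printf("%08x\n", state);
    return 0;
}
```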
I sound way more competent with the flags than I am here, haha. Does Gentoo use an alternate compiler by default?
This is in reference to an ancient Linux meme (cw: slur).
Ironically `'a'++` works in C/C++ because `'a'` is a `char`, where in JS `'a'` is a `string`.
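A quick sketch of that point in C (strictly speaking the literal `'a'` isn’t an lvalue, so you increment a variable holding it):

```c
#include <stdio.h>

int main(void) {
    char c = 'a';
    c++;                   /* character math: 'a' + 1 == 'b' */
    printf("%c\n", c);     /* prints: b */
    printf("%d\n", 'a');   /* prints: 97 -- 'a' is just a number in C */
    return 0;
}
```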
Yeah you’re actually right, it’s an `int` in C since K&R C didn’t have `bool`, however it’s a `bool` in C++. I forget my standards sometimes, because like I said this doesn’t really matter. It’s just nerd trivia.

https://en.cppreference.com/w/cpp/types/type_info/operator_cmp.html
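You can check the trivia directly: the same `sizeof` line reports different results compiled as C versus C++.

```c
#include <stdio.h>

int main(void) {
    /* As C:   == yields int,  so this typically prints 4.
     * As C++: == yields bool, so this prints 1. */
    printf("%zu\n", sizeof('a' == 'a'));
    return 0;
}
```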
There are plenty of sha1 implementations that are more readable and sensible, and plenty that are less. This portion is simply a manually unrolled loop (lmao these gcc nerds haven’t even heard of Gentoo) of the hash chunk computation rounds. Hash functions aren’t “impenetrable”, they’re just math. You can write math programmatically in a way that explains the math.
The point of this post is actually things like `x[(I-3)&0x0f]`. It’s entirely the same concept as coercion to manipulate index values this way. What’s funny is that void pointer math, function pointer math, and void pointers and function pointers in general are typically seen as “beyond the pale” for whatever reason.

Beyond that, if you know C you know why this is written this way with the parens. It’s because C has fucked up order of operations. For example, `a + b == 7` is literally “does adding a + b equal 7”, but if you write `a & b == 7` you would think it means “does a AND b equal 7”, but you’d be wrong. It actually means “AND a with the result of b == 7”, because `==` binds tighter than `&`.

Furthermore, `a & (b == 7)` makes no sense because b == 7 is a boolean value. Bitwise ANDing a boolean value should not work, because the width of the boolean is 1 bit and the width of an int is wider (typically 32 bits). ANDing should fail because there are void bits between the two types. However the standard coerces booleans in these cases to fit the full width, coercing the void bits to 0’s to make bitwise ANDing make sense.

Beyond that, asking for the memory size of a variable in C is a fool’s errand, because the real answer is “it depends” and “it also depends if someone decided to ignore what it typically depends on (compiler and platform) with some preprocessor fun”. Remember how I said “void pointers” are beyond the pale? Yeah, the typical “why” of that is because they don’t have a known size, but remember, the size of something in C is “it depends”. 🤷
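A minimal sketch of both points with made-up values (not gcc’s code): the `& 0x0f` mask is just mod-16 ring-buffer indexing, and the parens exist because `==` binds tighter than `&`.

```c
#include <stdio.h>

int main(void) {
    /* The & 0x0f trick: for a 16-entry ring buffer, masking is mod 16,
     * and unsigned arithmetic wraps negative offsets correctly. */
    unsigned I = 1;
    printf("%u\n", (I - 3) & 0x0f);   /* prints: 14, i.e. -2 mod 16 */

    /* The precedence trap: == binds tighter than &. */
    int a = 1, b = 7;
    printf("%d\n", a & b == 7);       /* a & (b == 7) -> 1 & 1 -> prints 1 */
    printf("%d\n", (a & b) == 7);     /* (1 & 7) == 7  -> prints 0 */
    return 0;
}
```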
Almost every language has idiosyncratic stuff like this, but some let you make up your own shit on top of that. These kinda low hanging fruit jokes are just people virtue signaling their nerddom (JS bad am rite guis, use a real language like C), when in reality this stuff is everywhere in imperative languages and typically doesn’t matter too much in practice. This isn’t even getting into idiosyncrasies based on how computers understand numbers, which is what subtracting from `0x5F3759DF` (fast inverse square root) references.
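For reference, a sketch of the trick that constant comes from (the Quake III fast inverse square root; the original used pointer casts, `memcpy` is the strict-aliasing-safe way to do the same bit reinterpretation):

```c
#include <string.h>

/* Fast inverse square root: reinterpret the float's bits as an integer,
 * do integer math on the exponent/mantissa, reinterpret back, refine. */
float q_rsqrt(float number) {
    float x2 = number * 0.5f;
    float y  = number;
    unsigned int i;

    memcpy(&i, &y, sizeof i);        /* the float's bit pattern, no cast UB */
    i = 0x5f3759df - (i >> 1);       /* the magic constant, in integer space */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - (x2 * y * y));   /* one Newton-Raphson iteration */
    return y;
}
```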
I thank god every day people who make these comics are too stupid to open gcc’s sha1.c because they’d see shit like:
```c
#define M(I) ( tm =   x[I&0x0f] ^ x[(I-14)&0x0f] \
                    ^ x[(I-8)&0x0f] ^ x[(I-3)&0x0f] \
             , (x[I&0x0f] = rol(tm, 1)) )

#define R(A,B,C,D,E,F,K,M)  do { E += rol( A, 5 )    \
                                      + F( B, C, D ) \
                                      + K            \
                                      + M;           \
                                 B = rol( B, 30 );   \
                               } while(0)

R( a, b, c, d, e, F1, K1, x[ 0] );
R( e, a, b, c, d, F1, K1, x[ 1] );
R( d, e, a, b, c, F1, K1, x[ 2] );
R( c, d, e, a, b, F1, K1, x[ 3] );
R( b, c, d, e, a, F1, K1, x[ 4] );
R( a, b, c, d, e, F1, K1, x[ 5] );
R( e, a, b, c, d, F1, K1, x[ 6] );
R( d, e, a, b, c, F1, K1, x[ 7] );
R( c, d, e, a, b, F1, K1, x[ 8] );
R( b, c, d, e, a, F1, K1, x[ 9] );
R( a, b, c, d, e, F1, K1, x[10] );
R( e, a, b, c, d, F1, K1, x[11] );
R( d, e, a, b, c, F1, K1, x[12] );
R( c, d, e, a, b, F1, K1, x[13] );
R( b, c, d, e, a, F1, K1, x[14] );
R( a, b, c, d, e, F1, K1, x[15] );
R( dee, dee, dee, baa, dee, F1, K1, x[16] );
R( bee, do, do, dee, baa, F1, K1, x[17] );
R( dee, bee, do, dee, dee, F1, K1, x[18] );
R( dee, dee, dee, ba, dee, F1, K1, x[19] );
R( d, a, y, d, o, F1, K1, x[20] );
```
And think, yeah this is real programming. Remember, the difference between being smart and incredibly stupid is what language you write it in. Using seemingly nonsensical coercion and operator overloads is cringe, making your own nonsensical coercion and operator overloads is based.
That’s why you should never subtract things from `0x5F3759DF` in any language other than C.
Demo Driven Development is wayyy worse.
Lol, the suggested hardware for usable performance is $50k for the GPUs alone, and that’s an SXM5 socket, so it’s all proprietary, extremely expensive, highly specific hardware.
My PC currently has a 7900 XTX which gives me about 156 GB combined VRAM, but it literally generates 1-3 words per second even at this level. DDR5 wouldn’t really help, because it’s a memory bandwidth issue.
TBH for most reasonable use cases, models with parameters quantized to 8 bits that can run on a laptop will give you more or less what you want.
This already happens in enterprise code bases with dummies running the show and juniors coding. Every primitive is actually a god object that can work at any level of the software stack.
Thomas L Friedman, of all people, saying that you can’t just put your feet up and coast because you have a Pulitzer is sending me.
It must be so hard to write Atlanticist propaganda laundering the reputation of the West’s newest darling that’s crucial to fulfilling the newest harebrained scheme to keep the empire together.
Laptops specifically have been such an Achilles heel for Linux due to driver issues and battery issues. I honestly would just rather stick with OSX and containerize. The thing that might test that is x86 support lapsing, at least for some of my MBPs.
lol. This is my story as well, except I wrecked my XP MBR and the CD was in a Dr. Dobbs that my dad had a sub to through his work. I was too impatient to wait for him to bring home an XP install CD.
I was 11. My dad had a bunch of Linux install CDs that came with Dr. Dobbs. I fucked up my XP MBR and asked him to bring home an XP install disk cause I lost all mine.
By the time he got home I had installed Mandrake Dolphin Linux on my PC.
> you are restricted to a set of statements that can be expressed using a particular type system
What I’m saying is that most good static typing systems do not practically have such limitations; you’d be very hard pressed to find them, and they’d be fairly illogical. Most static typing systems that are used in enterprise do have limitations, because they are garbage.
So in such shitty type systems you often have “code that’s written for the benefit of the type checker rather than a human reading it”. In good type systems, any code that’s written for the benefit of the type checker is often an antipattern.

For example, Lemmy devs prefer this trade off and it has nothing to do with enterprise workflows.
Rust has HKT support through GATs and typeclass support thru traits. Rust has minimal code you write for the benefit of the type checker.
Typescript technically has HKT support, but it’s a coincidence and the Typescript team doesn’t care about it; since the beginning, Typescript was made to be Enterprise Javascript by Microsoft. Though systems like fp-ts exist, they’re hard to get rolling in enterprise.
Typescript does have problems with “code that’s written for the benefit of the type checker rather than a human reading it”, in large part due to inefficiencies of the compiler itself, and in small part due to some corner cases that still exist: even though its type system is more advanced than the others in its enterprise-grade class, it’s still written in that style for that purpose, so the inconsistencies it makes to support the janky workflow (plus some ECMA stuff, e.g. `Promise` is not functionally typeable since the spec breaks set theory for convenience reasons) lead to that problem.

However, in Typescript these are avoidable problems and you are able to write code without dealing with the type checker’s bullshit a good amount of the time if you follow the correct patterns – certainly better than any other “enterprise grade” static typing system.
> Static typing itself is a trade off as well. It introduces mental overhead because you are restricted to a set of statements that can be expressed using a particular type system, and this can lead to code that’s written for the benefit of the type checker rather than a human reading it. Everything is a trade off in practice.
You mean code that’s written for the benefit of a low efficiency enterprise workflow, which is my love hate relationship with Typescript. Best choice out of a pile of shit.
Why not just run a hypervisor and use containers?
> That’s been the opposite of my experience using Clojure professionally. You’re actually far more likely to refactor and clean things up when you have a fast feedback loop. Once you’ve figured out a solution, it’s very easy to break things up, and refactor, then just run the code again and make sure it still works. The more barriers you have there the more likely you are to just leave the code as is once you get it working.
This is more of a how-the-sausage-is-made issue in my experience than a tooling selection issue. Clojure may make it easier to do the right thing, but the actual forcing function is the company culture. Self-selection of Clojure as the company’s tooling may create a correlation.
Most companies have the ability to implement fail fast workflows for their developers they simply choose not to because it’s “hard”. My preferred one is Behavior Driven Development because it forces you to constrain problems into smaller domains/behaviors.
> When you’re dealing with types or classes they exist within the context they’re defined in. Whenever you go from one context to another, you have to effectively copy the data to a new container to use it. With Clojure, you have a single set of common data structures that are used throughout the language. Any data you get from a library or a component in an application can be used directly without any additional ceremony.
An Adapter is typically a piece of code that transforms data between formats at various boundaries. Typeclasses remove the need for Adapters for functionality at library boundaries, e.g. most things have `map`, where in javascript I can’t do `{}.map` with the ECMA standard. However, typeclasses do not solve the problem of the literal data format and functionality differences between different implementations.

For example, I call some API using a Client and it returns bad gross data based on how that API is written; I would use an Adapter to transform that data into clean organized data my system works with. This is extremely helpful when your system and the external system have references to each other, but your data taxonomy differs.
A real example is that Google Maps used to have a distance matrix API where it would literally calculate matrices for you based on the locations you submit. Circa 2018 Google changed its billing, driving up the prices, which led a lot of people to use alternative services like Here.com. Here.com does not have a distance matrix API. So in order to build a distance matrix I needed to write an adapter that made N calls instead of Google’s 1 call and then stuff the Here.com responses into a matrix response compatible with Google’s API, which we unfortunately were using directly without an internal representation.
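To make that concrete, here’s a minimal hypothetical sketch of the Adapter idea in C (all names made up for illustration, not from any real client library): the external service’s per-route responses get adapted into the one matrix-shaped response the rest of the system already consumes.

```c
#include <stddef.h>

/* What the external (Here.com-style) service returns: one distance per call. */
typedef struct {
    double distance_m;
} RouteResponse;

/* What our system consumes: a Google-style n x n matrix response. */
typedef struct {
    size_t n;
    double distances_m[8][8];   /* fixed cap to keep the sketch simple */
} DistanceMatrixResponse;

/* The Adapter: n*n per-route responses in, one matrix response out. */
void adapt_routes_to_matrix(const RouteResponse *routes, size_t n,
                            DistanceMatrixResponse *out) {
    out->n = n;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            out->distances_m[i][j] = routes[i * n + j].distance_m;
}
```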
These concepts are still used/useful within functional contexts because they are not technical concepts, they are semantic concepts. In functional languages an Adapter may just be a function that your responses are mapped over; in OOP style it might be a class that calls a specific network client and mangles the data in some other way. Regardless of the technical code format, it is still the same concept and procedural form, thus it’s still a useful convention.
We grasp the core point: vim is not typical. This is not insightful.

What we care more about is the link to the jobs portal of the company that uses vim as its standard dev toolchain, since there will soon be an opening there.