  • I think that is less a problem with the technology itself than with how it is currently used and built. I wouldn’t say that anything generated with AI is stolen work, as that presupposes that AI necessarily involves stealing.

    I vaguely remember Adobe Firefly being trained only on properly licensed images, to the point that Adobe accepts legal liability for the output (though some AI-generated work did make it into their stock image site, which muddies the ethics, even if it will in all likelihood be legally impossible to pin down). Sadly, this is Adobe: it is all behind closed doors, you have to pay them a pretty significant sum, and you can’t really mess with the internals.

    So for now there is a choice between ethics, openness, and capability (pick at most two). Which, frankly, is a terrible state to be in.


  • > The difference is photography can be art, but it isn’t always. Photo composition and content are used to convey meaning. The photo is a tool under the artist’s complete control. The photo is not art on its own. Just like if you accidentally spill paint on a canvas it isn’t necessarily art, a photo taken without intent isn’t necessarily art. If I accidentally hit the camera button on my phone that doesn’t make me a photographer.

    I don’t completely agree. While an accident is one example where intent is missing, publishing accidental shots could be a form of art in its own right, as the act of publishing itself carries intent.

    Furthermore, nature photography is in my view also art, but provides much less control than studio photography, as the scene and subject are free to do whatever they want.

    > AI generated images can not do this. The user can give a prompt, but they don’t actually have control over the tool. They can modify their prompt to get different outputs, but the tool is doing its own thing. The user just has to keep trying until they get an output they like, but it isn’t done by their control. It’s similar to a user always accidentally doing things, until they get what they want. If you record every moment of your life you’re likely to have some frames that look good, but you aren’t a photographer because you didn’t intend to get that output.

    I don’t think recording everything would make it less of an art piece: you would have intentionally chosen to record continuously to capture that frame, and skimmed through those frames to find the right one. Like intentionally splattering paint on a canvas, you don’t intend to control the full picture (where each drop of paint ends up) but rather the conceptual idea of a splatter of paint, leaving the details, in part, up to physics.

    There are limits to what repeatedly prompting an AI model can do, but that doesn’t stop you from doing other things with the output, or toying with how it functions or how it is used, as my example shows.

    While I wouldn’t discount something just because it was created using AI, I need there to be something for me to interact with or think about in a piece of art. As the creation of an image is effectively driven by probability, anything missing from the prompt will in all likelihood be filled in with a probabilistically plausible answer, which makes the output rather boring and uninteresting. This doesn’t mean that AI cannot be used to create art, but it does mean you need to put in some effort to make it so.




  • The same thing happened to photography, and to other kinds of modern art, too. Things are often excluded from being art until they are accepted as such (at least by a subset of people).

    With AI it is often questionable how much ‘intent’ someone has put into a work: ‘wrote a trivial prompt, generated a few images, shared all of them’ results in uninteresting slop, while ‘spent a lot of time making the AI generate exactly what you want, even coming up with weird ways to use the model (like this / non-archive link)’ is a lot more interesting in my view.






  • I would love it if things weren’t as bad as they look, but…

    > Most of the destruction of buildings in Gaza is of empty buildings with no inhabitants. The IDF blows up or bulldozes buildings when they find booby traps in them, have tunnel entrances, provide military advantage, were used for weapons storage or command, were used as sniper or RPG nests, block lines of sight, to clear security corridors, space for military camps and operations, and so on. The list of reasons is long and liberally applied by the bulldozer operators and sappers on the ground.

    (emphasis mine) While destroying military targets is fair, pretty much every building blocks lines of sight, including civilian housing, shops, hospitals, and so on. Applied liberally, this essentially amounts to destroying all buildings. Having your house (and nearby facilities, like shops, schools, and hospitals) bulldozed will have a severe negative impact on your ability to live, even if you don’t die in the destruction itself.

    > The IDF warns before major operations and then almost all civilians leave the area. The evacuation of Rafah is a good example for this. There are also targeted attacks, usually by air, in non evacuated areas, but these are only responsible for a small fraction of the destruction.

    (emphasis mine) While the IDF does do this, and it avoids immediate death for many, it still deprives people of the human right to housing. Furthermore, a warning does not provide those who evacuate or flee with housing, food, and water, all of which are currently in significant shortage, while acting on the warning severely hurts one’s ability to provide for oneself: one can only carry so much. Disregard for innocent human lives isn’t just civilian deaths; it is also the deprivation of the resources one needs to live.


  • It says ‘a neighborhood’, not ‘one neighborhood’. Furthermore, the article specifically mentions that it is representative of other neighborhoods in Gaza.

    This neighborhood provides an example of the disregard for innocent human lives behind the Israeli attacks, with visual proof from satellite imagery, even if it is one of many.

    Stating ‘one neighborhood’ would imply it is the only one. While the NY Times does not have the best track record, that reading is needlessly reductive for an article that shows what is happening in Gaza. Especially as a picture of a single neighborhood can actually be more impactful than the whole: close enough that you can see the individual places where people live, far enough to see the extent of the destruction.


  • Also, ImageTragick was a thing: there are definite security implications to adding dependencies to implement a feature this way (especially on a shared instance). The API at the very least needs to handle auth, so that your images and videos don’t get rotated by others.

    Then you have UX: you may want to show the user that things have rotated (otherwise the button will be deemed non-functional, even if it uses this one-liner behind the scenes), but you probably don’t want to transfer the entire video multiple times to show this (too slow, costs data).

    Yeah, it is one thing to add a one-liner, but another to make a well-implemented feature.
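
    To make the auth point concrete, here is a minimal sketch of the ‘one-liner plus auth’ idea (Flask; the route, the X-User header, the ownership table, and the use of jpegtran are all assumptions for illustration, not any instance’s actual API):

    ```python
    # Hypothetical sketch: an ownership check in front of the "one-liner".
    import os
    import subprocess

    from flask import Flask, abort, request

    app = Flask(__name__)

    MEDIA_DIR = "/var/media"
    MEDIA_OWNERS = {"cat.jpg": "alice"}  # stand-in; a real instance queries its DB


    @app.post("/media/<name>/rotate")
    def rotate(name: str):
        # Auth first: without it, anyone could rotate anyone's uploads.
        user = request.headers.get("X-User")  # stand-in for real session auth
        if user is None or MEDIA_OWNERS.get(name) != user:
            abort(403)
        src = os.path.join(MEDIA_DIR, name)  # <name> disallows '/', limiting traversal
        # The one-liner itself: lossless 90-degree JPEG rotation via jpegtran.
        # Writing to a temp file and renaming avoids corrupting the original
        # if the tool fails halfway.
        subprocess.run(
            ["jpegtran", "-rotate", "90", "-outfile", src + ".tmp", src],
            check=True,
        )
        os.replace(src + ".tmp", src)
        return "", 204
    ```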


  • At least the EU is somewhat privacy-friendly here (excluding the Google tie-in), compared to the data-sharing and privacy mess the UK has obligated people to go through by sharing ID pictures or selfies.

    Proving you are 18+ through a zero-knowledge proof (i.e. the other party learns nothing beyond the fact that you are 18+), where the proof is generated locally on your own device from a government-signed date of birth (the government only issues the credential and doesn’t see where you use it), is probably the least privacy-intrusive way to do this, barring not checking anything at all.
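
    To make ‘who learns what’ concrete, a rough sketch of that flow (gov_sign / zk_prove / zk_verify are placeholders for a real signature and zero-knowledge range-proof scheme, not working crypto):

    ```python
    # Sketch of the three-party flow only; the point is who learns what.
    from dataclasses import dataclass
    from datetime import date

    # Placeholders for a real signature + ZK range-proof scheme (not working crypto).
    def gov_sign(data: bytes) -> bytes: ...
    def zk_prove(cred: "Credential", claim: str, today: date) -> bytes: ...
    def zk_verify(proof: bytes, claim: str, today: date, gov_pubkey: bytes) -> bool: ...


    @dataclass
    class Credential:
        dob: date         # stays on the user's device, never sent to anyone
        signature: bytes  # government signature over the date of birth


    # 1. Issuance: the government signs the date of birth once.
    #    It never learns where the credential is used afterwards.
    def issue(dob: date) -> Credential:
        return Credential(dob, gov_sign(dob.isoformat().encode()))


    # 2. Proving, locally on the user's device: the proof shows only that the
    #    signed date of birth implies "age >= 18", not the date itself.
    def prove_over_18(cred: Credential, today: date) -> bytes:
        return zk_prove(cred, claim="age >= 18", today=today)


    # 3. Verification by the website: it learns a single yes/no bit, with no
    #    name, no date of birth, and no identifier to link visits together.
    def site_accepts(proof: bytes, today: date, gov_pubkey: bytes) -> bool:
        return zk_verify(proof, claim="age >= 18", today=today, gov_pubkey=gov_pubkey)
    ```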


  • It is complicated. It is not technically always the case, but in practice it may very well be. As this page (in Dutch) notes, unless the driver can show that ‘overmacht’ (force majeure) applies, meaning they could not have performed any action that would have avoided or reduced the bodily harm, they are (at least in part) liable for damages. For example, not engaging the brakes as soon as it is clear you will hit the cyclist would still leave the driver (partially) liable for costs, even if the cyclist made an error themselves (such as crossing a red light).

    Because the burden of proof is on the driver, it may be hard to prove that this is the case, resulting in their insurance having to pay up even if they did not do anything wrong.



  • Wouldn’t the algorithm that creates these models in the first place fit the bill? Given that it takes a bunch of text data, and manages to organize this in such a fashion that the resulting model can combine knowledge from pieces of text, I would argue so.

    What is understanding knowledge anyway? Wouldn’t humans fail to fit the bill too, given that for most of our knowledge we do not know why it is the way it is, and have even held rules that were, in hindsight, incorrect?

    If a model is more capable of solving a problem than an average human being, isn’t it, in its own way, a form of intelligence? And, to take things to the utter extreme, wouldn’t evolution itself be intelligent, given that it causes intelligent behavior to emerge, with, for example, viruses adapting to external threats? What about an (iterative) optimization algorithm that finds solutions no human would be able to find?

    > Intelligence has a very clear definition.

    I would disagree: it is probably one of the hardest things out there to define, its definition has changed greatly over time, and it is core to the study of philosophy. Every time a being or thing fits a definition of intelligence, the definition is altered to exclude it, as has happened many times.


  • The flute doesn’t make for a good example, as the end user can take it and modify it as they wish, including with third-party parts.

    If we force the analogy: it would be as if the manufacturer ensured that all parts (even third-party ones) for these flutes could only be distributed through their store, and used this restriction to force any third party to comply with additional requirements.

    The key problem isn’t including third-party parts; it is actively blocking the usage of third-party parts and forcing additional rules (which affect existing markets, like payment processors) upon them, using control and market dominance to accomplish this.

    The Microsoft case was, in my view, weaker than this case against Apple, but Microsoft’s significant dominance in the desktop OS market made it such that it was deemed anti-competitive anyway. It probably did not help that web standards suffered greatly when MS was at the helm, and making a competitive, compatible browser was nigh impossible: most websites were designed for IE, using IE-specific tech, effectively locking users into IE. Because all users were on IE, developing a website using different tech was effectively useless, as users would end up using IE for other websites anyway. As IE was effectively the Windows browser (ignoring the brief period of IE for Mac…), this reinforced Windows’ dominance too. Note that, without market dominance, websites would not have pandered specifically to IE, and this particular tie-in would have been much less problematic.

    In the end, Google ended IE’s reign with Google Chrome, advertising it through the Google search engine’s reach. But if Microsoft had locked down the OS like Apple does, and required everything to go through their ‘app store’, I don’t doubt we would have ended up with a browser-engine restriction similar to Apple’s, with every browser effectively being a wrapper around the exact same underlying engine.



  • Yes, true, but that is assuming:

    1. Any potential future improvement solely comes from ingesting more useful data.
    2. That the amount of data produced is not ever-increasing (even excluding AI slop).
    3. No (new) techniques that make training more data-efficient are published or engineered.
    4. No (new) techniques that improve reliability are used, e.g. specializing a model for code auditing.

    What the author of the blogpost has shown is that it can find useful issues even now. If you apply this to a codebase, have a human categorize the reported issues as real or false positives, and train the model to make it more likely to generate real issues and less likely to generate false positives, it could still be improved specifically for this application (see the sketch below). That does not require nearly as much data as general improvements.
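
    A rough sketch of what that loop could look like, here as a cheap learned filter on top of the generator rather than fine-tuning the generator itself (model_generate, human_review, and embed are hypothetical stand-ins for the LLM, the reviewer, and an embedding model):

    ```python
    # Hypothetical sketch of a human-in-the-loop filter for generated reports.
    from sklearn.linear_model import LogisticRegression

    def model_generate(codebase: str) -> str: ...    # hypothetical LLM call
    def human_review(report: str) -> int: ...        # hypothetical: 1 real, 0 false positive
    def embed(report: str) -> list[float]: ...       # hypothetical embedding model


    def build_report_filter(codebase: str, n_reports: int = 500):
        # Generate candidate reports and have a human label each one.
        reports = [model_generate(codebase) for _ in range(n_reports)]
        labels = [human_review(r) for r in reports]
        # Train on the human verdicts: hundreds of labels, not internet-scale text.
        clf = LogisticRegression().fit([embed(r) for r in reports], labels)
        # Only surface future reports the filter considers likely real.
        return lambda report: clf.predict_proba([embed(report)])[0, 1] > 0.8
    ```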

    While I agree that improvements are not a given, I wouldn’t assume that they can never happen anymore. Despite these companies having effectively exhausted all of the text on the internet, improvements are currently still being made left, right, and center. If the many billions they are spending improve these models such that we get a fancy new tool for making our software safer and more secure: great! If it ends up being an endless money pit, and nothing ever comes of it, oh well. I’ll just wait and see which of the two it will be.


  • Not quite, though. In the blogpost the pentester notes that, while sifting through a number of the generated reports, he found that the model had spotted a similar issue (that he had overlooked) occurring elsewhere, in the logoff handler, which he then verified. Additionally, he noted that the fix the model supplied accounted for (and documented) an issue that his own suggested fix was (still) susceptible to. This shows that it could be(come) a new tool that lets us identify issues that are not found with techniques like fuzzing and that can be overlooked even by a pentester actively searching for them, never mind a kernel programmer.

    Now, these models generate a ton of false positives, which make the signal-to-noise ratio still much higher than what would be preferred. But the fact that a language model can locate and identify these issues at all, even if sporadically, is already orders of magnitude more than what I would have expected initially. I would have expected it to only hallucinate issues, not finding anything that is remotely like an actual security issue. Much like the spam the curl project is experiencing.