

TL;DR: While governments offer assurances that AI won’t make the final decision to launch nuclear weapons, they are tight-lipped about whether AI is being embedded in the intelligence-gathering and processing systems that inform the leaders making that decision. From a risk-assessment standpoint, there’s little difference between a faulty AI making the launch decision and a human making the launch decision based on that faulty AI’s output.

How much do you want to bet SCOTUS blocks California’s redistricting map but greenlights Missouri’s and North Carolina’s, each through tortured logic?