Once upon a time, a scientist was driving fast in a car full of weaponized super-ebola. It was raining heavily, so he couldn’t see clearly where he was going.

His passenger said calmly, “Quick question: what the fuck?”

“Don’t worry,” said the scientist. “Since I can’t see clearly, we don’t know that we’re going to hit anything and accidentally release a virus that kills all humans.”

As he said this, they hit a tree, released the virus, and everybody died slow, horrible deaths.

The End

The moral of the story is that the more uncertainty there is, the slower and more cautiously you should go.

Sometimes people say that we can’t know whether creating a digital species (AI) is going to harm us. Predicting the future is hard, therefore we should go as fast as possible.

And I agree: there is a ton of uncertainty around what will happen. It could be one of the best inventions we ever make. It could also be the worst, and make nuclear weapons look like benign little trinkets.

But precisely because it’s hard to predict, we should move more slowly and carefully. Anybody who’s confident it will go well, or confident it will go poorly, is overconfident. Things are too uncertain to go full speed ahead.

Don’t move fast and break things if the “things” in question could be all life on earth.
The AIs Will Only Do Good Fallacy.
You cannot think that:
California’s AI safety bill does not require kill switches for open source models.
People who say it does are either being misled or doing the misleading.

Under the bill, AIs under the control of their developer need a kill switch. Open source AIs are not under the control of their developer, so they do not need a kill switch. Many of the people spreading the claim that the bill will kill open source know this, and spread it anyway because they know “open source” is an applause light for so many devs.

Check the bill yourself; it’s short and written in plain language. Or ask an AI to summarize it for you.

Current AIs aren’t covered models and don’t have the capacity to cause mass casualties, so they’re fine and won’t be affected by this legislation.

Gavin Newsom, please don’t listen to corporate lobbyists who aren’t even attacking the real bill, but an imagined boogeyman. Please don’t veto a bill that’s supported by the majority of Californians.

The essential problem with AI safety: there will always be some people who are willing to roll the dice.
We need to figure out a way to convince people who have a reality distortion field around themselves to really get that superintelligent AI is not like the rest of reality. You can’t just be high-agency and gritty and resourceful and expect to win, in the same way that no matter how virtuous and intelligent a cow gets, it can never beat the humans.

We need to either convince them to change their minds, or use law and government to protect the many from the reality distortion fields of the few.

And I say this as an entrepreneurial person who has more self-efficacy than might be good for me. But I use that self-efficacy to work on getting us more time to figure AI safety out. Even I don’t have the arrogance to think that something vastly smarter and more powerful than me will care about what I want by default.

AI corporations complained, got most of what they wanted, and they’re still shrieking about SB 1047 just as loudly as before.
Their objections aren’t the real objections. They just don’t want any government oversight.

Remember: there has never been a movement in the history of the world without plenty of in-fighting and disagreement about strategy, including comms strategy.
People regularly accused Gandhi of being bad for the movement. People regularly accused MLK of being too radical, and other people regularly accused him of not being radical enough. This is just the nature of movements and strategy.

Once you understand this, you are free. Do what you think is highest impact. Other people will disagree. See if their suggestions or opinions are persuasive. If they are, update. If they aren’t, carry on, and accept that there will always be critics and disagreement.