Imagine a corporation accidentally kills your dog
They say, “We’re really sorry, but we didn’t know it would kill your dog. Half our team put above a 10% chance that it would kill your dog, but there had been no studies done on this new technology, so we couldn’t be certain it was dangerous.”

Is what the corporation did ethical or unethical?

Question 2: imagine an AI corporation accidentally kills all the dogs in the world. Also all of the humans and the other animals. They say, “We’re really sorry, but we didn’t know it would kill everybody. Half our team put above a 10% chance that it would kill everybody, but there had been no studies done on this new technology, so we couldn’t be certain it was dangerous.”

Is what the corporation did ethical or unethical?

~*~

I think it's easy to get lost in abstract land when talking about the risks of future AIs killing everybody. It's important to remember that when experts say there's a risk AI kills us all, "all" includes your dog. "All" includes your cat. "All" includes your parents, your children, and everybody you love.

When thinking about AI and the risks it poses, to avoid scope insensitivity, try replacing the abstract risk with a single, concrete loved one, and ask yourself, "Am I OK with a corporation taking an X% risk of killing this particular loved one?"
The AI race is not like the nuclear race: everybody wanted a nuclear bomb for their country, but nobody wants an uncontrollable god-like AI in their country.
Xi Jinping doesn’t want a god-like AI, because it would be a bigger threat to the CCP’s power than anything in history. Trump doesn’t want a god-like AI, because it would be a threat to his personal power. Biden doesn’t want a god-like AI, because it would be a threat to everything he holds dear. All of these people also have people they love. They don’t want god-like AI because it would kill their loved ones too.

No politician wants a god-like AI that they can’t control, whether for personal reasons of wanting power or for ethical reasons of not wanting to accidentally kill every person they love.

Owning nuclear warheads isn’t dangerous in and of itself. If they aren’t fired, they don’t hurt anybody. Owning a god-like AI is like . . . well, you wouldn’t own it. You would just create it, and very quickly it would be the one calling the shots. You will no more be able to control a god-like AI than a chicken can control a human.

We might be able to control it in the future, but right now, we haven’t figured out how to do that. Right now we can’t even get the AIs to stop threatening us if we don’t worship them. What will happen when they’re smarter than us at everything and are able to control robot bodies?

Let’s certainly hope they don’t end up treating us the way we treat chickens.

If you care about climate change, consider working on AI safety.
If we do AI right, it could fix climate change. If we do AI wrong, it could destroy the environment in weeks. And at current rates of “progress”, AI will lead to human extinction sooner than the climate will.

The way a superintelligent AI could solve climate change is by making a century of progress on renewable energy research in a matter of months. It would be able to do so because it would be as smart compared to us as we are to chickens. It will think faster. It will see connections that we can’t see.

Imagine putting 1,000 of the most brilliant scientists in a room and letting them think and tinker for a century, but it’s all sped up so that they experience a century while we experience a month. Imagine if they were pointed at solving climate change, and all the good that could do.

Now imagine them being uncontrollable, breaking loose, and doing whatever they want with no accountability.

We’ve already seen what happens when a species way more intelligent than the others is let loose: humans. We are superintelligent compared to dodo birds, and look how that turned out for them.

Now imagine something far more powerful than humans and far more indifferent to the environment. We at least need the environment. An AI won’t need food. An AI won’t need clean drinking water. If we can’t control it or make it care about the environment, it could destroy all of the ecosystems far faster than humans ever could. And it will destroy the environment for the same reason humans do: those atoms can be used for things it wants.

“AI safety” is about figuring out how to control an AI that’s smarter than us and how to make it care about good things, like the environment. And it’s currently about as neglected as climate change was in the 70s, so you getting involved right now could really move the needle.

We need people working on raising awareness. We need people working on the technical aspects. We need people working on getting the government to regulate corporations that are risking the public welfare for private profit. We need climate activists.

The funniest conspiracy theory that AI risk deniers believe is that AI risk activists are “just doing it for the money”
They’re trying to spin the narrative that charity workers are just motivated by money to go after poor, defenseless . . . Big Tech?

Do people know that ML engineers are getting paid multiple hundreds of thousands of dollars a year, and often literally millions? Do people know that anybody working on technical AI safety could be making way more money working for the for-profits instead of at a nonprofit?

Their reasoning is that charity workers say they are pushing for regulations to protect humanity from corporate greed, but secretly it’s actually to do regulatory capture so that OpenAI can have a monopoly? Because, you know, the charity workers and academics will make sooooo much money from OpenAI? As compared to, you know, the people who are actually working at these big tech companies?

It’s like if oil companies accused climate activists of trying to stop oil spills because the activists just want to profit off of oil companies.

People are indeed motivated by money, but it’s not the people working at charities and in academia. It’s the people making millions off of playing Russian roulette with everybody’s lives.

An AI safety thought experiment that might make you happy:
Imagine a doctor discovers that a patient of dubious rational abilities has a terminal illness that will almost definitely kill her in 10 years if left untreated. If the doctor tells her about the illness, there’s a chance she decides to try some treatments that make her die sooner. (She’s into a lot of quack medicine.) However, she’ll definitely die in 10 years if she’s told nothing, and if she’s told, there’s a higher chance that she tries some treatments that cure her.

The doctor tells her. The woman proceeds to do a mix of treatments: some speed up her illness, and some might actually cure her disease; it’s too soon to tell.

Is the doctor net negative for that woman? No. The woman would definitely have died if she had left the disease untreated. Sure, she made the dubious choice of treatments that sped up her demise, but the only way she could get the effective treatment was if she knew the diagnosis in the first place.

Now, of course, the doctor is Eliezer, and the woman of dubious rational abilities is humanity learning about the dangers of superintelligent AI.

Some people say Eliezer and the AI safety movement are net negative because raising the alarm led to the launch of OpenAI, which sped up the AI suicide race. But the thing is: the default outcome is death.

The choice isn’t:

1. Talk about AI risk, accidentally speed things up, then we all die, OR
2. Don’t talk about AI risk and then somehow we get aligned AGI

You can’t get an aligned AGI without talking about it. You cannot solve a problem that nobody knows exists.

The choice is:

1. Talk about AI risk, accidentally speed everything up, then we may or may not all die
2. Don’t talk about AI risk and then we almost definitely all die

So, even if it might have sped up AI development, this is the only way to eventually align AGI, and I am grateful for all the work the AI safety movement has done on this front so far.

The All-or-Nothing Pause Assumption: people assume that if we can't pause AI perfectly and forever, it's useless.
Do you see how silly this is? Imagine we applied this to biological weapons treaties.
This is preposterous.

Yes, treaties are impossible to enforce 100%. Yes, in certain industries, one mess-up can lead to a global catastrophe. No, that doesn't mean all treaties are therefore useless.

Biological weapons development has been slowed down massively by treaties, which make it so that private companies can't develop them, and any government that wishes to do so must break its own ethical codes and work in secret, under threat of extreme international sanctions if discovered.

Can you imagine what would have happened in some alternate timeline where we didn't ban biological weapons? Imagine the Bay Area developing all sorts of innovative new biological weapons and selling them on Amazon. How long do you think humanity would have lasted?

We don't need a permanent and 100% effective AI pause for it to help humanity. It will give safety researchers time to figure out how to build superintelligent AI safely. It will give humanity time to figure out whether we want to create a new intelligent species, and if so, what values we would like to give them. It will give us more time to live our terrible, wonderful, complicated monkey lives before the singularity changes everything.

So next time you see debates about pausing or slowing down AI, watch out for the All-or-Nothing Pause Assumption. Call it out when you see it. Because something can be good without being perfect.

We can pause AI.
If we can create a species smarter than us, I'm pretty sure we can figure out how to get a few corporations to coordinate. We've figured out harder coordination problems before. We've paused or slowed down other technologies, despite it being hard to monitor and people being incentivized to defect.

Let's figure this out.

AIs are a species.
They’re just the first non-biological species.

Common objections and their counterpoints:

1. They’re not made out of biological stuff, so they don’t count as a species.
If we discovered life on another planet that used a completely different biological mechanism and didn't have proteins or genetic material like we do, nobody would say it doesn't count as a species. We believed that there were species before we even knew there was such a thing as DNA or proteins. We’ve just never had a non-biological species before, so there was no reason to think of a definition that didn’t somehow involve biological matter.

2. They don’t reproduce on their own.
For one, every new chat you have with one of them spins up a new copy. Are flowers not a species because they need bees to help them reproduce? For two, if we let them, it’s trivially easy for current AIs to reproduce on their own. For three, you know, hey, we’re all atoms in an interconnected universe and free will is incoherent, so nothing ever does anything “on its own”. But let’s not get into that ;)

3. They don’t sexually reproduce.
Neither do any of the organisms that reproduce asexually.

4. They don’t have bodies.
They do, they’re just really weird bodies that look like giant buildings filled with humming computer chips. They are not disembodied spirits or something. They’re physical beings; we just only talk to them online, so we don’t see their bodies. Saying they don't have bodies is like saying fungi don't have bodies because the fungal network underground is invisible to humans most of the time and doesn’t look very “body”-like. Also, OpenAI is currently rushing to put them into humanoid robot bodies, which is going to make it really hit home how very species-like they are.

5. They’re not sentient.
It’s also very unlikely that an amoeba is sentient, but amoebas are still a species.

6. They didn’t evolve.
We don’t actually build AIs the way we write most code. We “grow” AIs: we set up some hyperparameters and the like, let them train and learn a ton, then look at the results and decide whether they get to survive and reproduce or whether we kill them (aka whether to deploy them or turn them off). This is artificial selection (there’s a toy sketch of this loop at the end of this post). Also, evolution is not necessary for the concept of “species” to exist. We believed in species long before we discovered evolution, and if we’d discovered that life had been created some other way, that wouldn’t have meant that species no longer existed.

7. It just makes me uncomfortable to think that they’re a species.
Yeah, me too. But when you feel uncomfortable, the wise reaction is to look at your feelings and ask yourself if they are valid. If they’re based on true and important considerations, then act accordingly. If AIs are a species, that should make you uncomfortable. The last time we shared the planet with other intelligent species, all but one went extinct. Also, you know, playing god and creating and ending life at will doesn’t seem like the wisest of ideas.

What do you think? Do you trust big tech companies to create a new species?
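As promised in point 6, here is a minimal, purely illustrative sketch of that "grow, evaluate, select" loop. It is not how any lab actually structures its pipeline; the names (train_model, evaluate, deploy) and the scoring rule are made up for illustration.

```python
import random

# Hypothetical stand-ins for what are, in reality, enormously complex systems.

def train_model(hyperparams):
    # Stand-in for an expensive training run; "grows" a model from settings we chose.
    return {"hyperparams": hyperparams, "weights": [random.random() for _ in range(4)]}

def evaluate(model):
    # Stand-in for benchmarks / evals; higher score means "fitter" in this toy world.
    return sum(model["weights"]) * model["hyperparams"]["learning_rate"]

def deploy(model):
    print("Deploying model trained with", model["hyperparams"])

# Grow several candidates under different hyperparameters.
candidates = [
    train_model({"learning_rate": lr, "layers": layers})
    for lr in (1e-4, 3e-4)
    for layers in (12, 24)
]

# Selection step: the best-scoring model "survives" (gets deployed);
# the rest are never deployed and their weights are discarded.
survivor = max(candidates, key=evaluate)
deploy(survivor)
```

The point of the sketch is only the shape of the process: we set the conditions, the models grow, and then we decide which ones live on. That is selection, even if nothing in it looks like biology.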
Would you rather:

1) Risk killing everybody's families for the chance of utopia sooner?
2) Wait a bit, then have a much higher chance of utopia and a lower chance of killing people?

More specifically, the average AI scientist puts a 16% chance that smarter-than-all-humans AI will cause human extinction. That’s because right now we don’t know how to do this safely, without risking killing everybody, including everybody's families. Including all children. Including all pets. Everybody.

However, if we figure out how to do it safely before building it, we could reap all of the benefits of superintelligent AI without taking a 16% chance of it killing us all.

This is the actual choice we’re facing right now. Some people are trying to make it seem like it’s either build the AI as fast as possible or never build it at all. But that’s not the actual choice. The choice is between fast and reckless, or delaying gratification to get it right.

The AI risk denier playbook
- Promote treaties ➡️ Promote authoritarian world government!
- Enforce treaties ➡️ Surveillance state!!!
- Define laws ➡️ Ban hardware / regulate math
- Advocate policies to the government ➡️ Shadowy lobbying
- Rich people donating to causes they care about ➡️ Evil people (because they're rich)
- One person out of a gajillion commits fraud ➡️ All these people are fraudsters!! (aka overt prejudice)
- Tiny nonprofits advocate for laws that protect humanity from corporate greed but could also maybe benefit some companies ➡️ AI safety folks are just in it for the money! Regulatory capture!
- Pretty much everybody in AI safety being pro virtually all other technologies ➡️ They're just anti-tech!
- Us telling them exactly what we care about (preventing human extinction) ➡️ Who knows what their real motives are?!?

Interestingly, it's really similar to the attempts by tobacco and oil companies to stop activists from raising awareness and passing regulations to protect the public. I'm reading Merchants of Doubt right now, and the parallels are eerie.

It's so funny when people say that we could just trade with a superintelligent/super-numerous AI
We don't trade with chimps. We don't trade with ants. We don't trade with pigs.

We take what we want. If there's something they have that we want, we enslave them. Or worse! We go and farm them!

A superintelligent/super-numerous AI killing us all isn't actually the worst outcome of this reckless gamble the tech companies are making with all our lives. If the AI wants something that requires living humans and it's not aligned with our values, it could make factory farming look like a tropical vacation.

We're superintelligent compared to animals, and we've created hell for trillions of them. Let's not risk repeating this.

S-risks are not uncommon
Factory farming is an incomprehensibly large horror imposed by a superintelligent species on another. People who are confident that more s-risks won't happen are overconfident.

Don't look away because it's uncomfortable to think about. That is how most evil happens.

To be a truly good person, you need to be able to look into the darkness without flinching away. You cannot solve problems if you cannot look at them or acknowledge their existence.

Nobody knows what causes consciousness.
We currently have no way of detecting it, and we can barely agree on a definition of it. You can only be certain that you yourself are conscious. Everything else is speculation, and so should be held with less than 100% certainty if you are being intellectually rigorous.

Because you know that it's not sci fi.
You know that it’s already happening. You have seen what a superintelligent race does to those less intelligent. You know there are fates worse than death.

And you know that we must fight it. With everything we've got.