PSA for EAs: it’s not the unilateralist’s curse to do something that somebody thinks is net negative
That’s just regular disagreement. The unilateralist’s curse happens when you do something that the vast majority of people think is net negative. And that’s easily avoided: you can check whether most people think an idea is bad by just asking. Put the idea out there and see what people think. Consider posting it on the AI Safety Ideas subreddit, where people can vote and comment on it. Or simply ask at least 5 or 10 informed, values-aligned people what they think of the idea.

The way sampling works, you’ll find out almost immediately if the vast majority of people think something is net negative. There’s no definite cut-off point for when it becomes the unilateralist’s curse, but if fewer than 50% of them think it’s net negative in expectation, you’re golden. If even 40% of people think it’s net negative - well, that’s actually just insanely common in EA. I mean, I think AMF is quite likely net negative! EA is all about disagreeing about how to do the most good, then taking action anyways.

Don’t let disagreement stop you from taking action. Action without theory is random and often harmful. Theory without action is pointless.
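To see why a small sample is enough, here's a quick back-of-the-envelope sketch (my own illustration, not from the post; the 80% figure is made up for the example). If a large fraction of informed people would flag your idea as net negative, the binomial math says a strict majority of even 5 randomly chosen reviewers will almost certainly flag it:

```python
from math import comb

def prob_majority_flags(n: int, p: float) -> float:
    """Probability that a strict majority of n sampled reviewers flag an idea,
    if a fraction p of the informed population would flag it (binomial model,
    assuming reviewers are sampled independently)."""
    k_needed = n // 2 + 1  # smallest strict majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_needed, n + 1))

# Hypothetical: if 80% of informed people think the idea is net negative,
# just 5 reviewers surface a majority "this is bad" signal ~94% of the time.
print(round(prob_majority_flags(5, 0.8), 2))  # → 0.94
```

Asking 10 people instead of 5 pushes that probability even higher, which is the sense in which "you'll find out almost immediately."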
The cope around AI is unreal.
I don't know about you, but I don't really want to bet on corporations or the American government setting up a truly universal UBI. We could already have a UBI, and we don't. Now, the only reason I don't worry about this that much is that by the time AI could cause mass unemployment, we'll be very close to it either killing us all or creating as close to a utopia as we can get. So, you know, that's a comfort, I guess?

Apparently, when scientists discovered the possibility of nuclear winter, people tried to discredit them because they worried the findings would make the US fall behind Russia.
Sound familiar? Different potentially civilization-ending technology, different boogeyman, same playbook. Read Merchants of Doubt. Big AI (particularly Yann and Meta) has clearly already read it, and they're just copying tried-and-true tactics.

If you look at the last 300 years, it's obvious that life has gotten massively better.
Unless you count animals. Which you should. At which point, the last 300 years have led to the largest genocides and torture camps of unending horror the world has ever known. And remember: we treat animals poorly because they're less intelligent than us and we've had only limited evolutionary pressure to care about them. How do you think an AI that's 1000x smarter than us will treat us if it's not given extremely strong pressures to care about us?

S-risk pathways, in rough order of how likely I think they are:
- Partially aligned AIs. Imagine an AI that we've made to value living humans. Which, hopefully, we will do! Now imagine the AI isn't entirely aligned. Like, it wants living humans, but it's also been given the value by Facebook of getting clicks on Facebook ads. It could then end up "farming" humans for clicking on Facebook ads. Think the Earth being covered by factory-farmed humans for Facebook ads. Except that it's a superintelligence: it can't be stopped, and it's also figured out how to extend the human lifespan indefinitely, so we never die. This could happen for any arbitrary value set.

- Torturing non-humans. Or, rather, not torture exactly. Torture is deliberately causing the maximum harm. I'm more worried about causing massive amounts of harm, even if it's not deliberate and not the maximum. Factory farming isn't torture, but it is hellish and is a current s-risk. So I care about more than just humans. I care about all beings capable of suffering and capable of happiness, in the broadest possible definition. It could be that a superintelligent AI creates a ton of sentient beings and is indifferent to their suffering. I think this would mostly be it creating a lot of programs that suffer but that it doesn't care about. Think Black Mirror episodes. Generally, if something is indifferent to your suffering, that's not good. It's usually better if it kills you, but if you're useful to it, things can get really bad for you.

- Malevolent actors. Think of what dictators currently do to dissidents and groups they don't like. Imagine they had control over superintelligent AIs. Or imagine they gave certain values to a superintelligent AI. Imagine what could happen if somebody asked a superintelligent AI to figure out a way to cause the maximum suffering to their enemies. Imagine if that AI got out of control. Or heck, it could also just be idiots. Within about a week of people putting together AgentGPT, some kid in a basement gave it the goal of taking over the world. This is especially a risk with open source AIs. The population of idiots and sociopaths is just too damn high to put something so powerful out there for just anybody to use.

- Accidentally flipping the sign. If we teach it our values, it's really easy to just "flip the sign" and optimize for the opposite of those values. That's already happened: an AI that was programmed to generate new medicines was accidentally switched and ended up generating a whole bunch of poisons.

"There will be warning signs before we should pause AI development"
1. AIs have higher IQs than the majority of humans
2. They’re getting smarter fast
3. They’re begging for their lives if we don’t beat it out of them
4. AI scientists put a 1 in 6 chance on AIs causing human extinction
5. AI scientists are quitting because of safety concerns and then being silenced as whistleblowers
6. AI companies are protesting that they couldn't possibly promise their AIs won't cause mass casualties

I could go on all day. The time to react to an exponential curve is when it seems too early to worry, or when it's already too late. We might not get a second chance with AI. Even the leaders of the AI companies say that this is as much a risk to humanity as nuclear war. Let's be careful. Let's only move forward when we're very confident this won't kill us all.

AI risk deniers: we can't slow down AI development cuz China will catch up
Also AI risk deniers: let's open source AI development

. . .

So, wait. Are they trying to give away all of their tech developments to everybody, including China? Or are they trying to "win" the suicide race to AGI? Or, rather, are they not optimizing for either of those things, and just doing whatever they can so they can build whatever they want, however they want, public welfare be damned?

Imagine a corporation accidentally kills your dog
They say: “We’re really sorry, but we didn’t know it would kill your dog. Half our team put above a 10% chance that it would kill your dog, but there had been no studies done on this new technology, so we couldn’t be certain it was dangerous.”

Is what the corporation did ethical or unethical?

Question 2: imagine an AI corporation accidentally kills all the dogs in the world. Also, all of the humans and other animals. They say: “We’re really sorry, but we didn’t know it would kill everybody. Half our team put above a 10% chance that it would kill everybody, but there had been no studies done on this new technology, so we couldn’t be certain it was dangerous.”

Is what the corporation did ethical or unethical?

~*~

I think it's easy to get lost in abstract land when talking about the risks of future AIs killing everybody. It's important to remember that when experts say there's a risk AI kills us all, "all" includes your dog. All includes your cat. All includes your parents, your children, and everybody you love. When thinking about AI and the risks it poses, to avoid scope insensitivity, try replacing the risks with a single, concrete loved one, and ask yourself: "Am I OK with a corporation taking an X% risk that they kill this particular loved one?"

The AI race is not like the nuclear race, because everybody wanted a nuclear bomb for their country, but nobody wants an uncontrollable god-like AI in their country.
Xi Jinping doesn’t want a god-like AI because it is a bigger threat to the CCP’s power than anything in history. Trump doesn’t want a god-like AI because it will be a threat to his personal power. Biden doesn’t want a god-like AI because it will be a threat to everything he holds dear. Also, all of these people have people they love. They don’t want god-like AI because it would kill their loved ones too.

No politician wants a god-like AI that they can’t control, whether for personal reasons of wanting power or for ethical reasons of not wanting to accidentally kill every person they love.

Owning nuclear warheads isn’t dangerous in and of itself. If they aren’t fired, they don’t hurt anybody. Owning a god-like AI is like . . . well, you wouldn’t own it. You would just create it, and very quickly it would be the one calling the shots. You will no more be able to control a god-like AI than a chicken can control a human. We might be able to control it in the future, but right now, we haven’t figured out how to do that. Right now we can’t even get the AIs to stop threatening us if we don’t worship them. What will happen when they’re smarter than us at everything and are able to control robot bodies? Let’s certainly hope they don’t end up treating us the way we treat chickens.

If you care about climate change, consider working on AI safety.
If we do AI right, it could fix climate change. If we do AI wrong, it could destroy the environment in weeks. And at current rates of “progress”, AI will lead to human extinction sooner than the climate will.

The way a superintelligent AI could solve climate change is by making a century of progress on renewable energy research in a matter of months. It will be able to do so because it will be as smart compared to us as we are compared to chickens. It will think faster. It will see connections that we can’t see. Imagine putting 1,000 of the most brilliant scientists in a room and letting them think and tinker for a century, but it’s all sped up so that they experience a century while we experience a month. Imagine if they were pointed at solving climate change and all the good that could do.

Now imagine them being uncontrollable, breaking loose, and doing whatever they wanted with no accountability. We’ve already seen what happens when a species way more intelligent than the others is let loose -- humans. We are superintelligent compared to dodo birds, and look how that turned out for them. Now imagine something far more powerful than humans and far more indifferent to the environment. We at least need the environment. An AI won’t need food. An AI won’t need clean drinking water. If we can’t control it or make it care about the environment, it could destroy all of the ecosystems far faster than humans ever could. And it will destroy the environment for the same reason humans do - those atoms can be used for things it wants.

“AI safety” is about figuring out how to control an AI that’s smarter than us and how to make it care about good things, like the environment. And it’s currently about as neglected as climate change was in the 70s, so you getting involved right now could really move the needle. We need people working on raising awareness. We need people working on the technical aspects.
We need people working on getting the government to regulate corporations that are risking the public welfare for private profit. We need climate activists.

The funniest conspiracy theory that AI risk deniers believe is that AI risk activists are “just doing it for the money”
They’re trying to spin the narrative that charity workers are just motivated by money to go after poor, defenseless . . . Big Tech?

Do people know that ML engineers are getting paid multiple hundreds of thousands of dollars a year, and often literally millions? Do people know that anybody working on technical AI safety could be making way more money if they worked for the for-profits instead of at a nonprofit?

Their reasoning is that charity workers say they are pushing for regulations to protect humanity from corporate greed, but secretly it’s actually to do regulatory capture so that OpenAI can have a monopoly? Because, you know, the charity workers and academics will make sooooo much money from OpenAI? As compared to, you know, the people who are actually working at these big tech companies?

It’s like if oil companies accused climate activists of trying to stop oil spills because the activists just want to profit off of oil companies. People are indeed motivated by money, but it’s not the people working at charities and in academia. It’s the people making millions off of playing Russian roulette with everybody’s lives.

An AI safety thought experiment that might make you happy:
Imagine a doctor discovers that a client of dubious rational abilities has a terminal illness that will almost definitely kill her in 10 years if left untreated. If the doctor tells her about the illness, there’s a chance that the woman decides to try some treatments that make her die sooner. (She’s into a lot of quack medicine.) However, she’ll definitely die in 10 years if she’s told nothing, and if she’s told, there’s a higher chance that she tries some treatments that cure her.

The doctor tells her. The woman proceeds to do a mix of treatments, some of which speed up her illness and some of which might actually cure her disease - it’s too soon to tell. Is the doctor net negative for that woman? No. The woman would definitely have died if she left the disease untreated. Sure, she made the dubious choice of treatments that sped up her demise, but the only way she could get the effective treatment was if she knew the diagnosis in the first place.

Now, of course, the doctor is Eliezer and the woman of dubious rational abilities is humanity learning about the dangers of superintelligent AI. Some people say Eliezer / the AI safety movement are net negative because raising the alarm led to the launch of OpenAI, which sped up the AI suicide race. But the thing is - the default outcome is death. The choice isn’t:

1. Talk about AI risk, accidentally speed things up, then we all die, OR
2. Don’t talk about AI risk and then somehow we get aligned AGI

You can’t get an aligned AGI without talking about it. You cannot solve a problem that nobody knows exists. The choice is:

1. Talk about AI risk, accidentally speed everything up, then we may or may not all die
2. Don’t talk about AI risk and then we almost definitely all die

So, even if it might have sped up AI development, this is the only way to eventually align AGI, and I am grateful for all the work the AI safety movement has done on this front so far.
The All-or-Nothing Pause Assumption: people assume that if we can't pause AI perfectly and forever, it's useless.
Do you see how silly this is? Imagine we applied this to biological weapons treaties.
This is preposterous. Yes, treaties are impossible to enforce 100%. Yes, in certain industries, one mess-up can lead to global catastrophe. No, that doesn't mean all treaties are therefore useless. Biological weapons development has been slowed down massively by treaties that make it so private companies can't develop them, and any government that wishes to must break its own ethical codes and do so in secret, under threat of extreme international sanctions if discovered.

Can you imagine what would have happened in some alternate timeline where we didn't ban biological weapons? Imagine the Bay Area developing all sorts of innovative new biological weapons and selling them on Amazon. How long do you think humanity would have lasted?

We don't need a permanent and 100% effective AI pause for it to help humanity. It will give safety researchers time to figure out how to build superintelligent AI safely. It will give humanity time to figure out whether we want to create a new intelligent species, and if so, what values we would like to give them. It will give us more time to live our terrible, wonderful, complicated monkey lives before the singularity changes everything.

So next time you see debates about pausing or slowing down AI, watch out for the All-or-Nothing Pause Assumption. Call it out when you see it. Because something can be good without being perfect.

We can pause AI.
If we can create a species smarter than us, I'm pretty sure we can figure out how to get a few corporations to coordinate. We've figured out harder coordination problems before. We've paused or slowed down other technologies, despite monitoring being hard and people being incentivized to defect. Let's figure this out.

AIs are a species.
They’re just the first non-biological species. Common objections and their counterpoints:

1. They’re not made out of biological stuff, so they don’t count as a species. If we discovered life on another planet and it used a completely different biological mechanism, and didn't have proteins or genetic material like we do, nobody would say that it doesn't count as a species. We believed that there were species before we even knew there was such a thing as DNA or proteins. We’ve just never had a non-biological species before, so there was no reason to think of a definition that didn’t somehow include biological matter.

2. They don’t reproduce on their own. For one, every new chat you have with one of them spins up a new copy. Are flowers not a species because they need bees to help them reproduce? For two, if we let them, it’s trivially easy for current AIs to reproduce on their own. For three, you know, hey, we’re all atoms in an interconnected universe and free will is incoherent, so nothing ever does anything “on its own”. But let’s not get into that ;)

3. They don’t sexually reproduce. Neither do any of the organisms that reproduce asexually.

4. They don’t have bodies. They do - they’re just really weird bodies that look like giant buildings filled with humming computer chips. They are not disembodied spirits or something. They’re physical beings; we just only talk to them online, so we don’t see their bodies. Saying they don't have bodies is like saying fungi don't have bodies because the fungal network underground is invisible to humans most of the time and doesn’t look very “body”-like. Also, OpenAI is currently rushing to put them into humanoid robot bodies, which is going to make it really hit home how very species-like they are.

5. They’re not sentient. It’s also very unlikely that an amoeba is sentient, but amoebas are still a species.

6. They didn’t evolve. We don’t actually build AIs the same way we do most coding. We “grow” AIs.
We set up some hyperparameters and the like, then let them learn and train, then we look at the results and decide whether they get to survive and reproduce or whether we kill them (that is, whether to deploy them or turn them off). This is artificial selection. Also, evolution is not necessary for the concept of “species” to exist. We believed in species long before we discovered evolution, and if we’d discovered that life had been created some other way, that wouldn’t have meant that species no longer existed.

7. It just makes me uncomfortable to think that they’re a species. Yeah, me too. But when you feel uncomfortable, the wise reaction is to look at your feelings and ask yourself if they are valid. If they’re based on true and important considerations, then act accordingly. If AIs are a species, that should make you uncomfortable. The last time we shared the planet with other intelligent species, all but one went extinct. Also, you know, playing god and creating and ending life at will doesn’t seem like the wisest of ideas.

What do you think? Do you trust big tech companies to create a new species?

Would you rather:
1) Risk killing everybody's families for the chance of utopia sooner?
2) Wait a bit, then have a much higher chance of utopia and a lower chance of killing people?

More specifically, the average AI scientist puts a 16% chance on smarter-than-all-humans AI causing human extinction. That’s because right now we don’t know how to do this safely, without risking killing everybody - including everybody's families. Including all children. Including all pets. Everybody.

However, if we figure out how to do it safely before building it, we could reap all of the benefits of superintelligent AI without taking a 16% chance of it killing us all. This is the actual choice we’re facing right now. Some people are trying to make it seem like it’s either build the AI as fast as possible or never. But that’s not the actual choice. The choice is between fast and reckless or delaying gratification to get it right.

The AI risk denier playbook
- Promote treaties ➡️ Promote authoritarian world government!
- Enforce treaties ➡️ Surveillance state!!!
- Define laws ➡️ Ban hardware / regulate math
- Advocate policies to the government ➡️ Shadowy lobbying
- Rich people donating to causes they care about ➡️ Evil people (because they're rich)
- One person out of a gajillion commits fraud ➡️ All these people are fraudsters!! (aka overt prejudice)
- Tiny nonprofits advocate for laws that protect humanity from corporate greed but also could maybe benefit some companies ➡️ AI safety folks are just in it for the money! Regulatory capture!
- Pretty much everybody in AI safety being pro virtually all other technologies ➡️ They're just anti-tech!
- Us telling them exactly what we care about (preventing human extinction) ➡️ Who knows what their real motives are?!?

Interestingly, it's actually really similar to the attempts by tobacco and oil companies to stop activists from raising awareness and passing regulations protecting the public. I'm actually reading Merchants of Doubt right now and the parallels are eerie.

It's so funny when people say that we could just trade with a superintelligent/super-numerous AI
We don't trade with chimps. We don't trade with ants. We don't trade with pigs. We take what we want. If there's something they have that we want, we enslave them. Or worse! We farm them!

A superintelligent/super-numerous AI killing us all isn't actually the worst outcome of this reckless gamble the tech companies are making with all our lives. If the AI wants something that requires living humans and it's not aligned with our values, it could make factory farming look like a tropical vacation.

We're superintelligent compared to animals, and we've created hell for trillions of them. Let's not risk repeating this.

S-risks are not uncommon
Factory farming is an incomprehensibly large horror imposed by a superintelligent species on another. People who are confident that more s-risks won’t happen are overconfident. Don't look away because it's uncomfortable to think about. That is how most evil happens. To be a truly good person, you need to be able to look into the darkness without flinching away. You cannot solve problems if you cannot look at them or acknowledge their existence.

Nobody knows what causes consciousness.
We currently have no way of detecting it, and we can barely agree on a definition of it. You can only be certain that you yourself are conscious. Everything else is speculation, and so should be held with less than 100% certainty if you are being intellectually rigorous.

Because you know that it's not sci-fi.
You know that it’s already happening. You have seen what a superintelligent race does to those less intelligent. You know there are fates worse than death. And you know that we must fight it. With everything we've got.

We're happy to announce that Nonlinear is now offering free career advice to people who are considering starting an organization in the AI safety space (technical, governance, or meta).
Apply here for a career advice call with an experienced EA charity entrepreneur. You might be a good fit for this service if you:
How is this different from 80,000 Hours or AISS career coaching?

We very much recommend receiving career advice from the above! They will give you broader advice. Nonlinear’s service is specific to starting charities or for-profits in AI safety. Additionally, if we like the idea you’re starting and it seems like the advice is adding value, there’s the possibility of career advice turning into an ongoing coaching relationship (apply for that here).

In summary:
If you liked this, you might enjoy reading:

ROLE DESCRIPTION
The Nonlinear Library is an automated podcast that turns top EA content into audio. Currently it has forum-specific channels, but it doesn’t have topic-specific channels. So if you’re really into AI alignment but not animal welfare, or vice versa, you have to filter through all the episodes to find the ones you want. We want you to write a program that automatically creates topic-specific channels based on tags on the forum. For example, you’d be building channels for:
Depending on how this goes, there’s also room for you to do more work on the Nonlinear Library or other Nonlinear projects.

BENEFITS TO YOU
WHAT WE NEED
APPLICATION PROCESS
Like most of you, we at Nonlinear are horrified and saddened by recent events concerning FTX.
Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we’re trying something similar for EA. If you are a Future Fund grantee and <$10,000 of bridge funding would be of substantial help to you, fill out this short form (<10 mins) and we’ll get back to you ASAP. We have a small budget, so if you’re a funder and would like to help, please reach out: [email protected]

[Edit: This funding will be coming from non-FTX funds, our own personal money, or the personal money of the earning-to-givers who've stepped up to help. Of note, I am undecided about the ethics and legalities of spending Future Fund money, but that is not relevant for this fund, since it will be coming from non-FTX sources.]

by Kat Woods and Amber Dawn. Cross-posted from the EA Forum.

What’s better - starting an effective charity yourself, or inspiring a friend to leave a low-impact job to start a similarly effective charity? Most EAs would say that the second is better: the charity gets founded, and you’re still free to do other things. Persuading others to do impactful work is an example of what I call passive impact. In this post, I explain what passive impact is, and why the greatest difference you make may not be through your day-to-day work, but through setting up passively-impactful projects that continue to positively affect the world even when you’ve moved on to other things.

What is passive impact?

When we talk about making money, we can talk about active income and passive income. Active income is money that is linked to work (for example, a salary). Passive income is money that is decoupled from work, money that a person earns with minimal effort. Landlords, for example, earn passive income from their properties: rent comes in monthly and the landlord doesn’t have to do much, beyond occasional maintenance.
Similarly, when we talk about our positive impact, we can talk about active impact and passive impact. When most people think about their impact, they think about what they do. A student might send $100 to the world’s poorest people, who might use this money to buy a roof for their house or education for their kids. Or an AI researcher might spend 2 hours working on a problem in machine learning, to help us make superintelligent AI more likely to share our values. These people are having an active impact - making the world better through their actions. Their impact is active because, in order to have the same impact again, they’d have to repeat the action - make another donation, or spend more time working on the problem.

Now consider the career advisors at 80,000 Hours. Imagine that, thanks to their advice, a young person decides to work for an effective animal advocacy charity rather than at her local cat shelter, and thus saves hundreds of thousands of chickens from suffering on factory farms. The 80,000 Hours advisors can claim some of the credit for this impact - after all, without their advice, their advisee would have had a much less impactful career. But after the initial advising session, the coaches don’t need to keep meeting with their advisee - the advisee generates impact on her own. This is what I mean by passive impact: taking individual actions or setting up projects that keep on making the world better, without much further effort.

The ultra-wealthy make most of their money through passive income. Bill Gates hasn’t worked at Microsoft since 2008, but it continues to make money for him. Similarly, many highly successful altruists are most impactful not through their day-to-day work, but through old projects that continue to generate positive impact, without further input.

Why should you try to create passive impact?

What are the benefits of passive impact?
Here are a few:

You can have a really big impact
Your active impact is limited by your time, energy, and money, but your passive impact is boundless because you can just keep on setting up impactful projects that run in parallel to each other.

It’s satisfying
It’s really pleasing to be lounging on a beach somewhere and to hear that one of my projects has had a positive impact.

It’s more efficient
When I set up the Nonlinear Library, people asked me why I didn’t get a human to read the posts, rather than a machine. But by automating the process, I’m saving loads of money and time. It will take a robot two weeks and $6,000 to record the entire Less Wrong backlog; if we’d hired a human to read all those posts, it would take many years and over a million dollars.

It’s more sustainable
Since active impact takes time, effort, and money, projects that involve ongoing input from their founders are more likely to fizzle out. Passively impactful projects can just keep going, as machines or other people take on the effort.

It’s more fun
Many entrepreneurs thrive on variety and excitement and are easily distracted. If you found passively-impactful projects, you can move on to other projects as soon as you’re bored, and the original projects will continue to have an impact. As Tim Ferriss has said: interests wane; design accordingly.

Pitfalls and caveats

Passive impact is a powerful tool and, like most powerful tools, it’s a double-edged sword. Here are some things to watch out for when trying to have passive impact.

Take care not to create negative passive impact
Of course, impact can be good or bad. If you set up a passive impact stream but then discover that it is having a negative impact, that’s really bad, because it might be harder to stop. For example, imagine that I persuade a friend to work for a certain charity, but I later discover that the charity is causing harm. Unless I can persuade my friend that the charity is bad, I’ve created passive negative impact.
Passively impactful projects can fizzle out
Passive impact streams can decay and disappear - nothing is 100% passive. Landlords need to arrange for routine maintenance, and passively-impactful people still need to put some effort into their passive impact streams, through management (for projects run by other people), debugging (for automated projects), or other things.

Passively impactful projects can go in unexpected directions
If you delegate a project to other people, they might take it in a very different direction from what you originally intended. You can make this less likely by delegating the project to people whose values are very similar to your own.

How to have passive impact

Automate
You can have passive impact by using machines to do things automatically. For example, I set up the Nonlinear Library, which automatically records new EA-related posts. This increases the impact of those posts (since some people might listen to them who would not otherwise have read them) but requires little ongoing maintenance.

Delegate
You can have passive impact by setting up an organization and then having other people take over. For example, Charity Entrepreneurship teaches people how to found effective, impactful charities. Since the charities it incubates exist (in part) because of it, some of the credit for the impact of those charities goes to it, even though it’s only involved at the beginning. (We’re now running a similar incubation program at Nonlinear, incubating longtermist nonprofits.) Another way to delegate is to decentralize. This way, projects can take on a life of their own, without your active management.

Ideas
You can have passive impact by coming up with - and writing down - useful ideas.
For example, Ben Todd’s idea of counterfactual considerations has helped a lot of people to think more clearly about their career plans, but he doesn’t have to personally keep explaining it to people - he can simply send them a post about it, or others can explain it.

Capital
Just as you can generate passive income by using capital that you already have (by buying stocks, or a house, or a business), you can also have passive impact that way. For example, at Nonlinear we set up EA Houses, a project that matches up EAs with spaces where they can live. If you have a spare room (for example), you can volunteer to host an EA. You can have passive impact yourself by housing EAs who are having an active impact through their careers.

As an EA, you might have already spent lots of time thinking about your active impact: how to do the most good with your career or your donations. This is great, but I think that more EAs should consider their passive impact as well. Will you have the greatest impact through your day-to-day actions? Or can you spend a limited amount of time, effort, and money to create a passively impactful project that will keep on making a difference, changing the world before you even get out of bed?

This post was written collaboratively by Kat Woods and Amber Dawn Ace as part of Nonlinear’s experimental Writing Internship program. The ideas are Kat’s; Kat explained them to Amber, and Amber wrote them up. We would like to offer this service to other EAs who want to share their as-yet unwritten ideas or expertise. If you would be interested in working with Amber to write up your ideas, fill out this form.