We can pause AI.
If we can create a species smarter than us, I'm pretty sure we can figure out how to get a few corporations to coordinate. We've solved harder coordination problems before. We've paused or slowed down other technologies, even when monitoring was hard and people had incentives to defect. Let's figure this out.
AIs are a species.
They're just the first non-biological species. Common objections and their counterpoints:

1. They're not made out of biological stuff, so they don't count as a species. If we discovered life on another planet that used a completely different biological mechanism, with no proteins or genetic material like ours, nobody would say it doesn't count as a species. We believed there were species before we even knew there was such a thing as DNA or proteins. We've just never had a non-biological species before, so there was no reason to write a definition that didn't smuggle in biological matter somehow.

2. They don't reproduce on their own. For one, every new chat you have with one of them spins up a new copy. Are flowers not a species because they need bees to help them reproduce? For two, if we let them, it's trivially easy for current AIs to reproduce on their own. For three, you know, hey, we're all atoms in an interconnected universe and free will is incoherent, so nothing ever does anything "on its own". But let's not get into that ;)

3. They don't sexually reproduce. Neither do any of the organisms that reproduce asexually, and they're still species.

4. They don't have bodies. They do, they're just really weird bodies that look like giant buildings filled with humming computer chips. They are not disembodied spirits. They're physical beings; we just only talk to them online, so we don't see their bodies. Saying they don't have bodies is like saying fungi don't have bodies because the fungal network underground is invisible to humans most of the time and doesn't look very "body"-like. Also, OpenAI is currently rushing to put them into humanoid robot bodies, which is going to make it really hit home how species-like they are.

5. They're not sentient. It's also very unlikely that an amoeba is sentient, but amoebas are still a species.

6. They didn't evolve. We don't actually build AIs the way we write most code. We "grow" AIs: we set up some hyperparameters and the like, let them train on a ton of data, then look at the results and decide whether they get to survive and reproduce or whether we kill them (that is, whether to deploy them or turn them off). This is artificial selection. Also, evolution is not necessary for the concept of "species" to exist. We believed in species long before we discovered evolution, and if we'd discovered that life had been created some other way, that wouldn't have meant species no longer existed.

7. It just makes me uncomfortable to think that they're a species. Yeah, me too. But when you feel uncomfortable, the wise reaction is to look at your feelings and ask whether they're valid. If they're based on true and important considerations, act accordingly. If AIs are a species, that should make you uncomfortable. The last time we shared the planet with other intelligent species, all but one went extinct. Also, you know, playing god and creating and ending life at will doesn't seem like the wisest of ideas.

What do you think? Do you trust big tech companies to create a new species? Would you rather:
1) Risk killing everybody's families for the chance of utopia sooner?
2) Wait a bit, then have a much higher chance of utopia and a lower chance of killing people?

More specifically, the average AI scientist estimates a 16% chance that smarter-than-all-humans AI will cause human extinction. That's because right now we don't know how to do this safely, without risking killing everybody, including everybody's families. Including all children. Including all pets. Everybody. However, if we figure out how to do it safely before building it, we could reap all the benefits of superintelligent AI without taking a 16% chance of it killing us all. This is the actual choice we're facing right now. Some people are trying to make it seem like it's either build the AI as fast as possible or never. But that's not the actual choice. The choice is between fast and reckless, or delaying gratification to get it right.

The AI risk denier playbook
- Promote treaties ➡️ Promote authoritarian world government!
- Enforce treaties ➡️ Surveillance state!!!
- Define laws ➡️ Ban hardware / regulate math
- Advocate policies to the government ➡️ Shadowy lobbying
- Rich people donating to causes they care about ➡️ Evil people (because they're rich)
- One person out of a gajillion commits fraud ➡️ All these people are fraudsters!! (aka overt prejudice)
- Tiny nonprofits advocate for laws that protect humanity from corporate greed but could also maybe benefit some companies ➡️ AI safety folks are just in it for the money! Regulatory capture!
- Pretty much everybody in AI safety being pro virtually all other technologies ➡️ They're just anti-tech!
- Us telling them exactly what we care about (preventing human extinction) ➡️ Who knows what their real motives are?!?

Interestingly, it's really similar to the attempts by tobacco and oil companies to stop activists from raising awareness and passing regulations protecting the public. I'm reading Merchants of Doubt right now and the parallels are eerie.

It's so funny when people say that we could just trade with a superintelligent/super-numerous AI
We don't trade with chimps. We don't trade with ants. We don't trade with pigs. We take what we want. If there's something they have that we want, we enslave them. Or worse! We go and farm them!

A superintelligent/super-numerous AI killing us all isn't actually the worst outcome of this reckless gamble the tech companies are making with all our lives. If the AI wants something that requires living humans and it's not aligned with our values, it could make factory farming look like a tropical vacation. We're superintelligent compared to animals, and we've created hell for trillions of them. Let's not risk repeating this.

S-risks are not uncommon
Factory farming is an incomprehensibly large horror imposed by a superintelligent species on another. People who are confident that more s-risks won't happen are overconfident. Don't look away because it's uncomfortable to think about. That is how most evil happens. To be a truly good person, you need to be able to look into the darkness without flinching away. You cannot solve problems if you cannot look at them or acknowledge their existence.

Nobody knows what causes consciousness.
We currently have no way of detecting it, and we can barely agree on a definition of it. You can only be certain that you yourself are conscious. Everything else is speculation, and so should be held at less than 100% certainty if you are being intellectually rigorous.

Because you know that it's not sci-fi.
You know that it's already happening. You have seen what a superintelligent race does to those less intelligent. You know there are fates worse than death. And you know that we must fight it. With everything we've got.

We're happy to announce that Nonlinear is now offering free career advice to people who are considering starting an organization in the AI safety space (technical, governance, or meta).
Apply here for a career advice call with an experienced EA charity entrepreneur. You might be a good fit for this service if you:
How is this different from 80,000 Hours or AISS career coaching? We very much recommend receiving career advice from the above! They will give you broader advice. Nonlinear’s service is specific to starting charities or for-profits in AI safety. Additionally, if we like the idea you’re starting and it seems like the advice is adding value, there’s the possibility of career advice turning into an ongoing coaching relationship (apply for that here). In summary:
If you liked this, you might enjoy reading:

ROLE DESCRIPTION
The Nonlinear Library is an automated podcast that turns top EA content into audio. Currently it has forum-specific channels, but it doesn't have topic-specific channels. So if you're really into AI alignment but not animal welfare, or vice versa, you have to filter through all the episodes to find the ones you want. We want you to write a program that automatically creates topic-specific channels based on tags on the forum. For example, you'd be building channels for subjects such as AI alignment, animal welfare, rationality, and global health.
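To give a flavor of what the program might look like, here is an illustrative sketch rather than a spec: the post-fetching function, tag names, and feed details below are placeholders, not the actual forum API. At its core, it's a simple tag-to-channel router:

```python
from collections import defaultdict

# Placeholder: in the real program this would call the forum's API and return
# recent posts (with their audio and tags) instead of a hard-coded example.
def fetch_recent_posts():
    return [
        {"title": "Example post", "audio_url": "https://example.com/ep1.mp3",
         "tags": ["AI alignment"]},
    ]

# Hypothetical channel names mapped to the forum tags that should feed them.
CHANNELS = {
    "ai-alignment": {"AI alignment"},
    "animal-welfare": {"Animal welfare"},
    "global-health": {"Global health and development"},
}

def route_posts_to_channels(posts):
    """Group posts into topic-specific channels based on their forum tags."""
    channels = defaultdict(list)
    for post in posts:
        post_tags = set(post["tags"])
        for channel, wanted_tags in CHANNELS.items():
            if post_tags & wanted_tags:
                channels[channel].append(post)
    return channels

if __name__ == "__main__":
    for channel, posts in route_posts_to_channels(fetch_recent_posts()).items():
        print(channel, [p["title"] for p in posts])
```

The real work of the role is everything around this core: talking to the forum APIs, handling tag changes, and publishing each channel as its own podcast feed.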
Depending on how this goes, there's also room for you to do more work on the Nonlinear Library or other Nonlinear projects.

BENEFITS TO YOU
WHAT WE NEED
APPLICATION PROCESS
Like most of you, we at Nonlinear are horrified and saddened by recent events concerning FTX.
Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we're trying something similar for EA. If you are a Future Fund grantee and <$10,000 of bridge funding would be of substantial help to you, fill out this short form (<10 mins) and we'll get back to you ASAP. We have a small budget, so if you're a funder and would like to help, please reach out: [email protected]

[Edit: This funding will be coming from non-FTX funds, our own personal money, or the personal money of the earning-to-givers who've stepped up to help. Of note, I am undecided about the ethics and legalities of spending Future Fund money, but that is not relevant for this fund, since it will be coming from non-FTX sources.]

by Kat Woods, Amber Dawn

Cross-posted from the EA Forum.

What's better - starting an effective charity yourself, or inspiring a friend to leave a low-impact job to start a similarly effective charity? Most EAs would say that the second is better: the charity gets founded, and you're still free to do other things. Persuading others to do impactful work is an example of what I call passive impact. In this post, I explain what passive impact is, and why the greatest difference you make may not be through your day-to-day work, but through setting up passively-impactful projects that continue to positively affect the world even when you've moved on to other things.

What is passive impact?

When we talk about making money, we can talk about active income and passive income. Active income is money that is linked to work (for example, a salary). Passive income is money that is decoupled from work, money that a person earns with minimal effort. Landlords, for example, earn passive income from their properties: rent comes in monthly and the landlord doesn't have to do much, beyond occasional maintenance. Similarly, when we talk about our positive impact, we can talk about active impact and passive impact.

When most people think about their impact, they think about what they do. A student might send $100 to the world's poorest people, who might use this money to buy a roof for their house or education for their kids. Or an AI researcher might spend 2 hours working on a problem in machine learning, to help us make superintelligent AI more likely to share our values. These people are having an active impact - making the world better through their actions. Their impact is active because, in order to have the same impact again, they'd have to repeat the action - make another donation, or spend more time working on the problem.

Now consider the career advisors at 80,000 Hours. Imagine that, thanks to their advice, a young person decides to work for an effective animal advocacy charity rather than at her local cat shelter, and thus save hundreds of thousands of chickens from suffering on factory farms. The 80,000 Hours advisors can claim some of the credit for this impact - after all, without their advice, their advisee would have had a much less impactful career. But after the initial advising session, the coaches don't need to keep meeting with their advisee - the advisee generates impact on her own. This is what I mean by passive impact: taking individual actions or setting up projects that keep on making the world better, without much further effort.

The ultra-wealthy make most of their money through passive income. Bill Gates hasn't worked at Microsoft since 2008, but it continues to make money for him.
Similarly, many highly successful altruists are most impactful not through their day-to-day work, but through old projects that continue to generate positive impact, without further input.

Why should you try to create passive impact?

What are the benefits of passive impact? Here are a few:

You can have a really big impact
Your active impact is limited by your time, energy, and money, but your passive impact is boundless because you can just keep on setting up impactful projects that run in parallel to each other.

It's satisfying
It's really pleasing to be lounging on a beach somewhere and to hear that one of my projects has had a positive impact.

It's more efficient
When I set up the Nonlinear Library, people asked me why I didn't get a human to read the posts, rather than a machine. But by automating the process, I'm saving loads of money and time. It will take a robot two weeks and $6,000 to record the entire LessWrong backlog; if we'd hired a human to read all those posts, it would take many years and over a million dollars.

It's more sustainable
Since active impact takes time, effort, and money, projects that involve ongoing input from their founders are more likely to fizzle out. Passively impactful projects can just keep going, as machines or other people take on the effort.

It's more fun
Many entrepreneurs thrive on variety and excitement and are easily distracted. If you found passively-impactful projects, you can move on to other projects as soon as you're bored, and the original projects will continue to have an impact. As Tim Ferriss has said: interests wane; design accordingly.

Pitfalls and caveats

Passive impact is a powerful tool and, like most powerful tools, it's a double-edged sword. Here are some things to watch out for when trying to have passive impact.

Take care not to create negative passive impact
Of course, impact can be good or bad. If you set up a passive impact stream, but then you discover that it is having a negative impact, that's really bad, because it might be harder to stop. For example, imagine that I persuade a friend to work for a certain charity, but I later discover that the charity is causing harm. Unless I can persuade my friend that the charity is bad, I've created passive negative impact.
Passively impactful projects can fizzle out
Passive impact streams can decay and disappear - nothing is 100% passive. Landlords need to arrange for routine maintenance, and passively-impactful people still need to put some effort into their passive impact streams, through management (for projects run by other people), debugging (for automated projects), or other things.

Passively impactful projects can go in unexpected directions
If you delegate a project to other people, they might take it in a very different direction from what you originally intended. You can make this less likely by delegating the project to people whose values are very similar to your own.

How to have passive impact

Automate
You can have passive impact by using machines to do things automatically. For example, I set up the Nonlinear Library, which automatically records new EA-related posts. This increases the impact of those posts (since some people might listen to them who would not otherwise have read them) but requires little ongoing maintenance.

Delegate
You can have passive impact by setting up an organization and then having other people take over. For example, Charity Entrepreneurship teaches people how to found effective, impactful charities. Since the charities it incubates exist (in part) because of them, some of the credit for the impact of those charities goes to them, even though they're only involved at the beginning. (We're now running a similar incubation program at Nonlinear, incubating longtermist nonprofits.) Another way to delegate is to decentralize. This way, projects can take on a life of their own, without your active management.

Ideas
You can have passive impact by coming up with - and writing down - useful ideas. For example, Ben Todd's idea of counterfactual considerations has helped a lot of people to think more clearly about their career plans, but he doesn't have to personally keep explaining it to people - he can simply send them a post about it, or others can explain it.

Capital
Just as you can generate passive income by using capital that you already have (by buying stocks, or a house, or a business), you can also have passive impact that way. For example, at Nonlinear we set up EA Houses, a project that matches up EAs with spaces where they can live. If you have a spare room (for example), you can volunteer to host an EA. You can have passive impact yourself by housing EAs who are having an active impact through their career.

As an EA, you might have already spent lots of time thinking about your active impact: how to do the most good with your career or your donations. This is great, but I think that more EAs should consider their passive impact as well. Will you have the greatest impact through your day-to-day actions? Or can you spend a limited amount of time, effort, and money to create a passively impactful project that will keep on making a difference, changing the world before you even get out of bed?

This post was written collaboratively by Kat Woods and Amber Dawn Ace as part of Nonlinear's experimental Writing Internship program. The ideas are Kat's; Kat explained them to Amber, and Amber wrote them up. We would like to offer this service to other EAs who want to share their as-yet unwritten ideas or expertise. If you would be interested in working with Amber to write up your ideas, fill out this form.

Cross-posted from the EA Forum.

Update #1: It's a rite of passage to binge the top LessWrong posts of all time, and now you can do it on your podcast app.
We (Nonlinear) made "top of all time" playlists for LessWrong, the EA Forum, and the Alignment Forum. Each is around ~400 of the most upvoted posts.

Update #2: The original Nonlinear Library feed includes top posts from the EA Forum, LessWrong, and the Alignment Forum. Now, by popular demand, you can get forum-specific feeds. Stay tuned for more features. We'll soon be launching channels by tag, so you can listen to specific subjects, such as longtermism, rationality, animal welfare, or global health. Enter your email here to get notified as we add more channels.

Below is the original explanation of The Nonlinear Library and its theory of change.

We are excited to announce the launch of The Nonlinear Library, which allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. In the rest of this post, we'll explain our reasoning for the audio library, why it's useful, why it's potentially high impact, its limitations, and our plans. You can read it here or listen to the post in podcast form here.

Listen here: Spotify, Google Podcasts, Pocket Casts, Apple, or elsewhere. Or just search for it in your preferred podcasting app.

Goal: increase the number of people who read EA research

A koan: if your research is high quality, but nobody reads it, does it have an impact? Generally speaking, the theory of change of research is that you investigate an area, come to better conclusions, people read those conclusions, they make better decisions, all ultimately leading to a better world. So the answer is no. Barring some edge cases (1), if nobody reads your research, you usually won't have any impact.

Research → Better conclusion → People learn about conclusion → People make better decisions → The world is better

Nonlinear is working on the third step of this pipeline: increasing the number of people engaging with the research. By increasing the total number of EA and rationalist articles read, we're increasing the impact of all of that content. This is often relatively neglected because researchers typically prefer doing more research instead of promoting their existing output. Some EAs seem to think that if their article was promoted one time, in one location, such as the EA Forum, then surely most of the community saw it and read it. In reality, it is rare that more than a small percentage of the community will read even the top posts. This is an expected-value tragedy: a researcher puts hundreds of hours of work into an important report which only a handful of people read, dramatically reducing its potential impact. Here are some purely hypothetical numbers just to illustrate this way of thinking:

Imagine that you, a researcher, have spent 100 hours producing outstanding research that is relevant to 1,000 out of a total of 10,000 EAs. Each relevant EA who reads your research will generate $1,000 of positive impact. So, if all 1,000 relevant EAs read your research, you will generate $1 million of impact. You post it to the EA Forum, where posts receive 500 views on average. Let's say, because your report is long, only 20% read the whole thing - that's 100 readers. So you've created 100 x $1,000 = $100,000 of impact. Since you spent 100 hours and created $100,000 of impact, that's $1,000 per hour - pretty good! But if you were to spend, say, 1 hour promoting your report - for example, by posting links on EA-related Facebook groups - to generate another 100 readers, that would produce another $100,000 of impact. That's $100,000 per marginal hour, or ~$2,000 per hour taking into account the fixed cost of doing the original research. Likewise, if another 100 EAs were to listen to your report while commuting, that would generate an incremental $100,000 of impact - at virtually no cost, since it's fully automated. In this illustrative example, you've nearly tripled your cost-effectiveness and impact with one extra hour spent sharing your findings and having a public system that turns it into audio for you.
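Here's the same arithmetic as a quick back-of-the-envelope calculation (all numbers are the purely hypothetical ones from the example above):

```python
# Back-of-the-envelope version of the hypothetical example above.
hours_of_research = 100
value_per_relevant_reader = 1_000  # dollars of impact per relevant EA who reads it

readers_from_forum_post = 100            # 500 views, ~20% read the whole thing
readers_from_one_hour_of_promotion = 100
readers_from_audio = 100                 # listeners while commuting, fully automated

impact_baseline = readers_from_forum_post * value_per_relevant_reader
impact_with_promotion = impact_baseline + readers_from_one_hour_of_promotion * value_per_relevant_reader
impact_with_audio = impact_with_promotion + readers_from_audio * value_per_relevant_reader

print(impact_baseline / hours_of_research)               # $1,000 per hour
print(impact_with_promotion / (hours_of_research + 1))   # ~$2,000 per hour
print(impact_with_audio / (hours_of_research + 1))       # ~$3,000 per hour: nearly tripled
```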
Another way the audio library is high expected value is that instead of acting as a multiplier on just one researcher or one organization, it acts as a multiplier on nearly the entire output of the EA research community. This allows for two benefits: long-tail capture and the power of large numbers and multipliers.

Long-tail capture. The value of research is extremely long-tailed, with a small fraction of the research having far more impact than the rest. Unfortunately, it's not easy to do highly impactful research or to predict in advance which topics will lead to the most traction. If you as a researcher want to do research that dramatically changes the landscape, your odds are low. However, if you increase the impact of most of the EA community's research output, you also "capture" the impact of the long tails when they occur. Your probability of applying a multiplier to very impactful research is actually quite high.

Power of large numbers and multipliers. If you apply a multiplier to a bigger number, you have a proportionately larger impact. This means that even a small increase in the multiplier leads to outsized improvements in output. For example, if a single researcher toiled away to increase their readership by 50%, that would likely have a smaller impact than the Nonlinear Library increasing the readership of the EA Forum by even 1%. This is because 50% times a small number is still very small, whereas 1% times a large number is actually quite large. And there's reason to believe that the library could have much larger effects on readership, which brings us to our next section.

Why it's useful

EA needs more audio content
EA has a vibrant online community, and there is an amazing amount of well-researched, insightful, and high-impact content. Unfortunately, it's almost entirely in writing, and very little is in audio format. There are a handful of great podcasts, such as the 80,000 Hours and FLI podcasts, and some books are available on Audible. However, these episodes come out relatively infrequently, and the books even less so. There are a few other EA-related podcasts, including one for the EA Forum, but a substantial percentage have become dormant, as is far too common for channels because of the considerable effort required to put out episodes.

There are a lot of listeners
The limited availability of audio is a shame because many people love to listen to content. For example, ever since the 80,000 Hours podcast came out, a common way for people to become more fully engaged in EA has been to mainline all of its episodes. Many others got involved through binging the HPMOR audiobook, as Nick Lowry puts it in this meme. We are definitely a community of podcast listeners.

Why audio?
Often, you can't read with your eyes but you can with your ears - for example, when you're working out, commuting, or doing chores. Sometimes it's just for a change of pace. In addition, some people find listening to be easier than reading. Because it feels easier, they choose to spend time learning that might otherwise be spent on lower-value things. Regardless, if you like to listen to EA content, you'll quickly run out of relevant podcasts - especially if you're listening at 2-3x speed - and have to either use your own text-to-speech software or listen to topics that are less relevant to your interests.

Existing text-to-speech solutions are sub-optimal
We've experimented extensively with text-to-speech software over the years, and all of the dozens of programs we've tried have fairly substantial flaws.
In fact, a huge inspiration for this project was our frustration with the existing solutions and thinking that there must be a better way. Here are some of the problems that often occur with these apps:
How The Nonlinear Library fixes these problems

To make it as seamless as possible for EAs to use, we decided to release it as a podcast, so you can use the podcast app you're already familiar with. Additionally, podcast players tend to be reasonably well designed and offer great customizability of playlists and speeds. We're paying for some of the best AI voices because old voices suck. And we spent a bunch of time fixing weird formatting errors and mispronunciations, and we have a system to fix other recurring ones. If you spot any frequent mispronunciations or bugs, please report them in this form so we can continue improving the service. Initially, as an MVP, we're just posting each day's top upvoted articles from the EA Forum, Alignment Forum, and LessWrong. (3) We are planning on increasing the size and quality of the library over time to make it a more thorough and helpful resource.

Why not have a human read the content? The Astral Codex Ten podcast and other rationalist podcasts do this. We seriously considered it, but it's just too time consuming, and there is a lot of written content. Given the value of EA time, both financially and counterfactually, this wasn't a very appealing solution. We looked into hiring remote workers, but that would still have ended up costing at least $30 an episode, compared to approximately $1 an episode via text-to-speech software. Beyond the higher time and monetary costs, text-to-speech also lets us build a far more complete library. If we did this with humans, and we invested a ton of time and management, we might be able to convert seven articles a week. At that rate, we'd never be able to keep up with new posts, let alone include the historical posts that are so valuable. With text-to-speech software, we can keep up with all new posts and convert the old ones, creating a much more complete repository of EA content. Just imagine being able to listen to over 80% of the EA writing you're interested in, compared to less than 1%.

Additionally, the automaticity of text-to-speech fits with Nonlinear's general strategy of looking for interventions that have "passive impact". Passive impact is the altruistic equivalent of passive income, where you make an upfront investment and then generate income with little to no ongoing maintenance costs. If we used human readers, we'd have a constant ongoing cost of managing them and hiring replacements. With TTS, after setting it up, we can mostly let it run on its own, freeing up our time to do other high-impact activities. Finally, and least importantly, there is something delightfully ironic about having an AI talk to you about how to align future AI.

On a side note, if for whatever reason you would not like your content in The Nonlinear Library, just fill out this form. We can remove that particular article or add you to a list to never add your content to the library, whichever you prefer.

Future Playlists ("Bookshelves")

There are a lot of sub-projects that we are considering doing or are currently working on. Here are some examples:
Who we are

We're Nonlinear, a meta longtermist organization focused on reducing existential and suffering risks. More about us.

Footnotes

(1) Sometimes the researcher is the same person as the person who puts the results into action, such as in Charity Entrepreneurship's model. Sometimes it's a longer causal chain, where the research improves the conclusions of another researcher, which improves the conclusions of another researcher, and so forth, but eventually it ends in real-world actions. Finally, there is often the intrinsic happiness of doing good research felt by the researcher themselves.

(2) For those of you who want to use TTS for a wider variety of articles than what the Nonlinear Library will cover, the ones I use are listed below. Do bear in mind they each have at least one of the cons listed above. There are probably also better ones out there, as the landscape is constantly changing.
(3) The current upvote thresholds for which articles are converted are:
- 25 for the EA Forum
- 30 for LessWrong
- No threshold for the Alignment Forum, due to low volume
These thresholds are based on the frequency of posts, relevance to EA, and quality at certain upvote levels.
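For the curious, the daily conversion loop behind these thresholds is conceptually very simple. Here's a rough sketch - the post-fetching and text-to-speech functions are placeholders standing in for the real forum APIs and TTS service, not our actual code:

```python
# Illustrative sketch of the daily conversion loop, not the production pipeline.
UPVOTE_THRESHOLDS = {
    "EA Forum": 25,
    "LessWrong": 30,
    "Alignment Forum": 0,  # no threshold, due to low volume
}

def fetch_top_posts(forum):
    """Placeholder: return yesterday's posts for a forum as (title, karma, body) tuples."""
    return []

def text_to_speech(text):
    """Placeholder: call a TTS service and return the path to the generated audio file."""
    return "episode.mp3"

def build_daily_episodes():
    episodes = []
    for forum, threshold in UPVOTE_THRESHOLDS.items():
        for title, karma, body in fetch_top_posts(forum):
            if karma >= threshold:
                audio = text_to_speech(f"{title}, published on the {forum}. {body}")
                episodes.append({"forum": forum, "title": title, "audio": audio})
    return episodes  # these would then be published to the podcast feed
```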
Cross-posted from the EA Forum.

We're often asked what you can do to increase your odds of starting a career as a charity entrepreneur. While each person's answer will be different given their background and traits, here are the three most common things people can do:
1. Show that you can do things on your own

Most people's lives neither encourage nor support self-direction. Typical education models always tell you what to do, where to be, and how well you're doing. The same goes for the usual job, with a manager who will fire you if you don't do the things they tell you to do, to a certain standard, by a certain date. You may have some flexibility within that framework, but the scope for action is relatively narrow.

Entrepreneurship is entirely different. You are staring at a blank canvas. The only external accountability you have is in the distant future. You might only talk to a donor once a year. And you can't cram a whole year's worth of work into a week before you talk to them. It's not like school, where you can get by with cramming if you're talented enough. You need to do work every day even though nothing bad will happen in the immediate future if you don't. What's more, there's nobody telling you which things need to be done in the first place. You could work on strategy, hiring, management, M&E, or even moving to Hawaii if you felt it was the best call. You have to pick the direction.

Most people have little experience with autonomy. When they're faced with it, they're filled with immense discomfort at the uncertainty. That's why so many people postpone thinking about what to do after their education, often by simply getting another degree. The good news is that these skills are all learnable. You just have atrophied initiative muscles due to disuse. All you have to do is practice. Once you do, the discomfort becomes smaller, and can be replaced by an exhilarating feeling of empowerment and freedom.

However, if you've never done it before, you may not be good at it. You have to learn how to motivate yourself when nobody else is helping you. You have to learn how to pick a good direction when there's no existing structure. That's why we look for people who have experience doing this. It's more likely that they'll be able to handle charity entrepreneurship: they've done this before and are not jumping into the deep end straight away.

Possible actions
2. Learn and practice good decision-making

Your success in life is determined by the direction you travel in and how efficiently you get there. However, people often focus on the latter, improving their capacity and productivity, while neglecting the former, thus getting nowhere fast. Making good decisions is a key factor in making sure you're picking the right way to go. This is crucial for charity entrepreneurship since, as mentioned above, you'll be facing a blank canvas in terms of what to do.

Many people are not very good at decision-making, their lives mostly characterized by bumbling around and stumbling upon things that are good enough. When asked why they chose a particular degree or career, they'll say, "I don't know. I guess I was good at it and liked it and I was accepted." Their process was opportunistic rather than deliberate. Fortunately, decision-making is not a personality trait but a skill that can be developed. The two broad steps to follow for this are to:
Learning is relatively straightforward. There are many resources on how to think about decisions; we've listed some below. Putting them into practice is harder. The biggest trick is remembering to do them in the first place. Unlike with some habits, there's usually no obvious trigger, since "make a decision" is hardly a concrete thing. Most of the time, making a decision doesn't feel like a decision. It just feels like life. However, there are a few situations where you can practice and hone your skills. These include choosing a:
Possible actions
3. Get a degree in effective altruism

At Nonlinear we look for people who think well about how to maximize their impact using reason and science, which in essence means they are effective altruists. Much of this comes down to good decision-making, but a lot of it is also absorbing the lessons and thoughts that have already been discovered or expressed in the community. There's no need to reinvent the wheel.

While there's no way to get a degree in EA yet, you can still think of your knowledge level in EA as being akin to your level of education. Many people have an elementary knowledge of EA, having read a single book or watched a couple of talks. Others have a PhD: they've read practically everything there is on the subject and are on the cutting edge of a particular topic in the area. We're often looking for people who have at least a metaphorical undergraduate or master's degree. You can come in with a high school diploma if you have other qualities and skills that are sufficiently strong (and this goes for all criteria), but it will be a lot easier if you've got this qualification. What does this look like? There are three different paths, and you'll ideally follow all three of them.
Possible actions
In summary, there are many things you can do to increase your odds of starting a career in charity entrepreneurship. Even if you don't get into a dedicated program run by The Nonlinear Fund, your life, skills, and impact will in all likelihood be improved by these actions. Hopefully this helps you, and we look forward to seeing your application!

Nonlinear invites those of you who are interested in starting new EA organizations to apply to our EA Hiring Agency Incubation Program. The deadline is February 1st, 2022, 11:59pm EST. You can sign up to our newsletter to be notified of other potential incubation offers in the future.
Summary
If your main contribution to EA is time, how long should you spend trying to figure out the best thing to do before you switch to taking action? The EA community has spent a lot of time thinking about this question as it relates to money, but money and time differ in important ways. You can save money then give it at a later date; you cannot do the same with time. In this article I will show my current best guess at the answer to the question. Broadly speaking, you should switch from researching to acting once the expected value of marginal research equals the expected value of acting. The goal then is to figure out the values of these two parameters. The value of taking action depends on:
The value of marginal research depends on:
The conclusion I drew from these considerations was to invest heavily in up-front research, then do research at spaced intervals to account for considerations you missed, new ones others have thought of, and the world changing over time. The initial up-front research time can be calculated by putting the above considerations into a formula based on your best estimates, then figuring out where the marginal value of research dips below the value of enacting the conclusions you came to. Our current best guess suggests we spend two to eight person-years researching.

What's the existing literature on the topic?

There is a lot already written on doing good now versus doing good later in the EA community, but mostly in regards to giving money. There is also much written about the topic in the wider decision theory community, where it's commonly referred to as optimal stopping or explore/exploit trade-offs. While there are many interesting ideas in the area, their solutions cannot be straightforwardly applied to EA because they solve problems fundamentally different from those we face.

The secretary problem is probably the most famous example of an optimal stopping problem, but it is not a good fit for analyzing EA decisions. Briefly, the thought experiment sets out to figure out how many secretaries to interview before you hire one. Given the conditions of the scenario, there is a mathematical solution: interview the first 37% of the secretaries you can afford to see without hiring any of them, and after the 37% mark, hire the first secretary who is as good as or better than the best found in that exploration phase. The reason this cannot be applied to our question is that it assumes you can quickly tell which secretary is better than another, but in EA, problems are very difficult to compare. For example, is deworming or bed nets better? It's very unclear, and that's in a relatively well-studied area. Comparing animal rights research to preventing AI x-risk is even more fraught with ambiguity. Other limitations of the secretary problem are discussed in the comments of this post here.

The multi-armed bandit problem is similarly limited. If you are at a casino with multiple slot machines (sometimes called one-armed bandits), and you don't know each machine's probability of payoff, which arms do you pull and in what order? There are multiple solutions to this problem and its variants; however, its applications to EA are limited. For example, it assumes that when you pull an arm, you instantly know what your rewards are. This is a hard assumption to make in EA. Even if we had perfect knowledge about the results of different actions, it would still be unclear how to value those results. Even if you are an ethical anti-realist, there could be considerations that you hadn't thought of before that change the size or even the sign of your expected effect (e.g., your stance on speciesism or population ethics).

Despite these and other unlisted problems, there are still some useful takeaways from the literature. Those I found most useful were the arm analogy, the Gittins index, and the observation that explore/exploit decisions depend largely on the time you have left.

What counts as "pulling an arm" in the multi-armed bandit?

In the multi-armed bandit problem, the Gittins index is one solution. Roughly, it says to pull arms you haven't pulled before, because they could reveal an arm that beats your current best; but if, after you've pulled an arm a certain number of times, it still hasn't beaten your current best guess, you can move on.
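To make the 37% rule from the secretary problem concrete, here is a minimal simulation sketch. Note that it assumes candidates can be strictly ranked against each other - exactly the assumption that, as argued above, breaks down when comparing EA causes:

```python
import random

def secretary_rule(candidates, explore_fraction=0.37):
    """Classic stopping rule: observe the first ~37% without committing,
    then pick the first later candidate who beats the best seen so far."""
    cutoff = int(len(candidates) * explore_fraction)
    best_seen = max(candidates[:cutoff]) if cutoff else float("-inf")
    for quality in candidates[cutoff:]:
        if quality > best_seen:
            return quality
    return candidates[-1]  # forced to take the last one

def success_rate(n_candidates=100, trials=10_000):
    """Estimate how often the rule picks the single best candidate (~37% in theory)."""
    wins = 0
    for _ in range(trials):
        candidates = random.sample(range(n_candidates), n_candidates)  # random arrival order
        if secretary_rule(candidates) == n_candidates - 1:
            wins += 1
    return wins / trials

print(success_rate())  # roughly 0.37
```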
So what counts as pulling an arm when it comes to EA? Is it starting a charity or working for an organization? This doesn't seem right. Even though I have never worked for Homeopaths Without Borders, I can know that the utility payoff won't be good. I already have, in some sense, pulled the arm. This leads to the idea that pulling an arm is analogous to activities that gather information about the option under consideration. Thus the most natural equivalent to pulling an arm is doing a unit of time of learning. The unit of time is relatively arbitrary and can be cut up into very large or small amounts, so I'll just use a day for simplicity's sake. Learning can be done by doing the option or by researching it through other methods. This speaks to the question of how long to give a cause a chance before giving up. For example, if you are not convinced of a cause in the EA community, how long should you keep reading new articles about it, engaging in debates with its supporters, and so forth, before you stop and start researching other possible contenders? A concrete example: if you know a lot about NTDs but very little about international trade reform, it would be more valuable to research the latter.

Explore/exploit decisions depend on the time you have left

Another useful heuristic from the optimal stopping literature is that how long you spend exploring versus exploiting depends on how long you have left to exploit your best option. This fits with intuition fairly well. For example, if you are on your deathbed, you probably shouldn't waste any time trying to make new friends, but rather spend your precious last hours with people you already know and love. However, if you're in college and have many decades of your life left, you should probably invest a lot of time making friends, because if you find a great new friend, you will be able to enjoy that relationship for decades to come. This applies to charity in the sense of how many working years you have left - although, to be more specific, it's not how many working years you have left, but how many years you'll be able to capitalize on your research.

Value of doing

Time left

After brainstorming, we came up with 10 factors that affect how many years you should expect to be able to capitalize on your exploration phase. I expect these to vary enormously based on personality, history, choices, environment, etc., so one person's answers cannot be generalized to other people. Nonetheless, you can definitely apply the same process to yourself as we did and see what the results are. The factors are:
Value Drift

Theoretical Approach

The most commonly cited concern about how long you will be able to capitalize on your research is value drift. Many things could cause this, such as burnout, having children who then take priority, getting distracted, etc. An important thing to keep in mind with value drift is that it comes in degrees. Scaling back your involvement by 10% is not the same as giving up on EA altogether and becoming a surf bum in Hawaii. Burning out so badly that you need a two-week vacation is different from burning out so thoroughly that you never give another hour of your time.

The risk of value drift is a very personal one, so it cannot be generalized easily, but by the same token, it is very easy to be biased about yourself. People typically fall for the end-of-history illusion about their personality, consistently under-predicting how much they will change in the future. In fact, since I joined the EA movement I've seen a substantial percentage of people be very enthusiastic and involved at the beginning, only to completely switch or lose motivation a few months or years later. The movement is still very young, so I suspect an even larger proportion will leave as time goes on. Likelihood of value drift is influenced by:
Empirical Approach

Are there any empirical studies that shed light on the issue? Unfortunately, there is little data. There were some interesting studies on how many people who became social workers stayed in the field, but the literature was inconsistent and the measurement only a rough proxy. For example, if somebody leaves government work to run a nonprofit women's shelter, does that count as leaving social work? Likewise, what's the relevant reference class for EAs leaving the movement? Should I put myself in the category of those who donated $20 to AMF then forgot about GiveWell? Or maybe it should be the people who've started charities in the area? That seems like reference class tennis to me, and I do not have a current solution to that issue. In the end, the empirical approach did not provide much new information.

Considering all of these factors, for myself, I put a 15% chance on major value drift and a 45% chance on minor value drift, with both of these most likely to happen earlier on and less likely as time goes on. This had a large effect on my predicted working years left, which is to be expected.

Pinker Effect

The world is getting better, which is fantastic, but could it be bad for our altruistic endeavors? Could we run out of all of the good options? I think the answer is no, unless you're working exclusively in global health.

Let's start with global poverty, especially global health. It is undeniably getting better, and at an incredibly fast rate. It would be unsurprising if, 60 years from now, it were considered strange that anybody ever went without bed nets or their recommended vaccines. If at the end of your research you decide that global health is your top cause, you should get started right away, as all of the good opportunities are indeed getting snatched up. However, global health is not the only cause. Some causes are not getting better but worse, such as animal welfare or environmental degradation, so there might be more to be done to help in the future.

An additional benefit of the world getting better is that we're getting better at helping too. There might actually be more effective interventions in the future because people have spent a longer time thinking about and testing different strategies. For example, medieval activists didn't have the opportunity to provide vaccines because vaccines didn't exist yet. In the end, the biggest factor for us is that we are becoming wiser and expanding our moral circle. There are likely other causes, such as bug suffering, that could be extremely valuable and neglected, and that society will ignore - as it has factory farming - for decades or centuries to come.

So, the good news is that the world will still have problems after your research*, so don't worry too much about it getting better. You should probably not let this dramatically affect how many working years you have left.

*Just as a note, in case it wasn't clear because tone can be lost in writing: I am 100% joking that it's good news that there will still be problems when we're older. It would obviously be great news if there were no more suffering.

Potential for a larger team / inspiring others

I have the fortune to be on a team of like-minded individuals, such that we can have a high level of coordination. This means that I can focus on research while others focus on doing the action that we currently think is the highest value. The higher the alignment, the larger my effective sphere of influence.
If I can completely rely on one other person, and they on me, we can get twice as much done as a single person. This is true at a very high level of alignment with a small number of people, but it could also hold with a larger number of people who share less value and epistemic alignment. To illustrate the point: if you do the research and it inspires somebody to start the exact charity that you would have wanted, then action is highly delegable, and if your comparative advantage is research, you should keep researching, propagate your research, and advocate for others to act upon it. This is the general strategy of GiveWell when it comes to recommending where others give, and of 80,000 Hours in terms of its career research. The question is: how delegable is action? Money is fairly straightforward. A dollar given to AMF by somebody who hates science does the same amount of good as a dollar given by a science geek. However, if a charity is run by one person rather than another, many different choices will be made that affect the charity's effectiveness. For example, the science-disregarding person might hear anecdotes of bed nets being used as fishing nets and switch to a different intervention, whereas the science geek might read the literature and see that while this occasionally happens, it's swamped by the positive effects. There are some examples in the EA community of researchers inspiring charities, and of founders stepping back from their roles as CEOs, which can provide a rough outside view. You can check how valuable the charity continued to be, according to the founder's values and epistemics, compared to how it was or would have been had they continued to run it. Based on the examples I am familiar with, approximately 15% got better after handing off, 45% stayed the same, and 40% got worse. However, this changes if you take into account how much time the founder or researcher invested in the charity at the beginning, ranging from simply writing a blog post about the idea to spending multiple years setting up the organization. With heavy founder investment, 0% got worse, 85% stayed the same, and 15% got better. There are other factors aside from founder time that affect delegability:
Despite our relatively pessimistic views on delegability, this still represented a huge increase in our number of "effective working years" and thus in the value of upfront research.
Conclusion on working years remaining
To put all of these considerations together, I started off by assuming that I would retire at the normal age, since the factors pulling for and against late retirement roughly cancel out. This left me with 40 years. Then I subtracted or added expected years of work based on the probabilities I put on the different factors. Results will vary based on your personality, choices, and environment. For myself, after hours and hours of work, thought, and calculation, I ended up, anticlimactically, with 40.2 years of expected work. This was not what I was expecting, but it was still worth the effort. Initially I had simply thought about value drift and applied a steep discount to my work, without taking into account any positives or thinking about the whole picture. I recommend others try this exercise as well, because it could affect your decisions.
Flow-through effects
It has been argued that the effects of doing good now compound: if you inspire one person to earn to give, you will keep inspiring new people, and they will too, thus "earning interest on your interest". I believe this is an oversimplification. For one, say you start a direct poverty charity, it inspires approximately one new charity per year, and those charities have the same "inspiration rate". This won't go on forever until everybody in the world is starting direct poverty charities. It's not an exponential curve but an s-shaped one: initial low-hanging fruit, exponential growth for a period of time, then diminishing growth. And this isn't the end of the story. After all of these charities start, they don't last forever. People retire, charities shut down, problems are solved, and so on. So after the tapering there is probably a relatively linear decline. Additionally, compounding benefits apply to doing good later as well. It's not as if, were you to start a charity 10 years from now, nobody would care anymore. However, there is still a penalty for starting later. If you spent 39 years researching and then 1 year doing, you'd have only 1 year of inspiring, so only one extra charity started because of you, whereas if you had spent 1 year researching and 39 years doing, you'd have far more charities inspired by you. It is important to note that this model compares acting now with doing the same action years later; it does not take into account the increased value of your best option that you reap from more research. Furthermore, this is probably an overly optimistic scenario. There are many more ways to deviate from doing good through doing than through giving. If you are earning to give and you inspire another person to earn to give, and they donate to the same charity or set of charities as you, it's easy to see how much good they are doing by your standards. Starting a charity or working for one is much more complicated because of the diversity of options. So depending on how pluralistic your values and epistemics are, inspiring others is more or less good. This reasoning is analogous to another consideration, which is that doing builds more capacity and resources than researching does. Research of this sort is relatively cheap to run, with little more than the cost of salaries.
However, running a direct charity requires far more employees and direct costs, such that one must build up a larger donor network to run it. For example, running our research program costs $25,000 USD this year, whereas running Charity Science Health will cost $250,000 for its first year and could well reach the multi-million-dollar-per-year mark. On the other hand, if the cause you end up choosing is very different from your initial top option, much of the donor network you built up for that first charity will not be interested in your next choice. Nonetheless, this ends up not being too large a consideration, because building up resources likely follows an s-shaped curve as well. This means that even if you start a few years later than your counterfactual self, you will eventually more or less catch up with them in terms of resources.
Learning by Doing
There's a great quote from Brian Tomasik: "There's a quote attributed to Abraham Lincoln (perhaps incorrectly): 'Give me six hours to chop down a tree, and I will spend four hours sharpening the axe.' This nicely illustrates the idea of front-loading learning, but I would modify the recommendation a little. First try taking a few whacks, to see how sharp the axe is. Get experience with chopping. Identify which parts of the process will be a bottleneck (axe sharpness, your stamina, etc.). Then do some axe sharpening or resting or whatever, come back, and try some more. Keep repeating the process, identifying along which variables you need most improvement, and then refine those. This agile approach avoids waiting to the last minute, only to discover that you've overlooked the most important limiting factor."
This covers a rather important advantage of doing: when you're doing, you're not just doing. The learning never stops. How much should we take this into account? I think it is definitely important, because it helps determine which options are realistic and helps calibrate your probabilities. Indeed, historically there have been many things that I could perhaps have learned via research, but probably wouldn't have without getting my hands dirty. On the other hand, learning by doing is learning by anecdote. Learning through reading is learning through thousands of anecdotes, otherwise known as science, or at least picking up the individual anecdotes of many others. Additionally, there are some things you can simply never "learn by doing", including many crucial considerations. For example, you can't just work for a charity and naturally pick up whether frequentism or Bayesianism is better, or whether you should be speciesist. Those are things that need explicit reasoning and research. Furthermore, learning by doing is very costly per amount learned compared to direct learning. Getting a job or starting a project in an arena is a huge investment that is hard to pull back from once you've started. On the other hand, you risk losing touch with reality if you do not have some hands-on experience, and hands-on experience also lessens the gap between learning via research and learning via implementing your top option. Fortunately for me, I share an office with a direct implementation organization, so I get the benefits of both worlds and have not felt the need to fully resolve this question. This may be hard to replicate, but some alternatives, like befriending those doing direct work, might confer similar benefits.
Value of Research
The value of research is, rather straightforwardly, the increased value of your best choice.
A great example: when I started my altruistic career as a child, I saved kelp. My grandmother had told me that kelp were alive. I took this to mean that they were sentient, and then spent many summer hours saving kelp from drying out on the beach and suffering a painful, drawn-out death. In retrospect this was adorable, but 0% effective. With a vast increase in knowledge since then, I have learned that kelp are not sentient, and given my better understanding of the world, I am now helping people at a much larger scale. The value of my best option increased enormously. The key question, then, is how much a marginal amount of research increases the value of your best option. This is impossible to answer precisely, because we'd need to know the end result, and if we knew that, we wouldn't need to do the research. Fortunately we have a way to deal with uncertainty in this domain: the expected value of information. Peter Hurford has a great post on this, and his is generally the method I followed; I just added the concept of remaining working years to figure out how research compares to doing. Which brings me to the last concept: you should switch from researching to acting once the expected value of marginal research falls to the expected value of acting. The expected value of research will go up for a while, then start going down as you've thought of most of the relevant considerations. It will also go down over your life as you have less and less time left to capitalize on the knowledge, which will eventually nudge you into action. To calculate the marginal value of spending year t researching: [(value of your best option after t years of research) x (working years left after t years of research)] - [(value of your best option after t-1 years of research) x (working years left after t-1 years of research)]. Simplified, this is just the total value of t years of research minus the total value of t-1 years of research. Calculate this for each year until the result goes below 0, at which point switch to doing. Of note, in this model I assume that each year of research adds a consistent percentage of the remaining potential value. This means you get closer and closer to 100% of the potential, but never reach it. So if I expect a value increase of 10 times if I researched forever, and to capture 50% of the remaining value each additional year, I would expect to get 5x the value after the first year, then 50% of the remaining 5 (i.e. 2.5 more) the next year, and so on.
Here's a worked example:
Expected change in value of best option = 5 times better than the current option
Proportion of remaining potential value captured per marginal year of research = 70%
Working years total = 40
Year 1: [70% x 5 x (40 years - 1 year researching)] - [40 years at the current option's value] = 136.5 - 40 = 96.5. This is positive, so try the next year.
Year 2: [((5 - 3.5) x 70% + 3.5) x (40 - 2)] - 136.5 = 172.9 - 136.5 = 36.4. This is positive, so try again.
Year 3: [((5 - 4.55) x 70% + 4.55) x (40 - 3)] - 172.9 = 180 - 172.9 = 7.1. This is positive but close to 0, so we're getting close.
Year 4: [((5 - 4.865) x 70% + 4.865) x (40 - 4)] - 180 = 178.5 - 180 = -1.5. This is negative, but only just, so it indicates that you should spend a little under 4 years researching before moving on to acting.
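For readers who prefer code, here is a minimal sketch that reproduces the worked example above. It uses the same illustrative assumptions (a 5x improvement ceiling, 70% of the remaining potential captured per research year, 40 working years, and a current option worth 1 unit per year); the function names are my own, not from Peter Hurford's post.

```python
# Minimal sketch of the stopping-rule calculation above (illustrative numbers only).

def value_after_research(t, full_improvement=5.0, yearly_capture=0.7):
    """Per-year value of your best option after t years of research.

    Each year captures `yearly_capture` of the remaining potential value,
    so the value approaches `full_improvement` but never reaches it.
    """
    return full_improvement * (1 - (1 - yearly_capture) ** t)

def lifetime_value(t, working_years=40, current_value=1.0):
    """Total value of researching for t years, then acting for the rest."""
    if t == 0:
        return current_value * working_years  # never research; act at current value
    return value_after_research(t) * (working_years - t)

# Keep researching while the next year's marginal value is positive.
t = 0
while lifetime_value(t + 1) - lifetime_value(t) > 0:
    t += 1
    print(f"Year {t}: marginal value {lifetime_value(t) - lifetime_value(t - 1):+.1f}")
print(f"Year {t + 1} would be net negative, so switch to acting after ~{t} years.")
```

Running this prints positive marginal values for years 1 through 3 (+96.5, +36.4, +7.1) and stops before year 4, matching the "a little under 4 years" conclusion above.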
Of course there are many limitations to this calculation. The three main ones are:
We ran these calculations with a variety of optimistic, pessimistic, and best-guess scenarios, and all of the results came out in the 2-to-8-person-year range. The next question is what to do with these numbers. Two to eight years is a wide range, and the numbers are uncertain and thus subject to wide fluctuations as new information comes in. Our conclusion has been to follow this general process:
The advantages of this method compared to the others we considered are that it saves time, that deadlines make you work faster, and that you get the benefit of seeing things with fresh eyes. It also makes sense because the calculations are only rough approximations, so they do not give enough precision to guide day-to-day decisions in any case.
Spaced Research Throughout the Rest of Life
This is only half the puzzle. You cannot simply research once and then call it a day. The world changes and there will be new considerations. Thus part of the solution is to do spaced-out research phases throughout the rest of your life. So, how should they be spaced? We've decided to postpone that decision until after the initial phase of research, but here are some contenders we thought of:
Remaining Questions
These considerations are currently incomplete. Some of the weaknesses we plan to investigate further as separate crucial considerations; others we might come back to when assessing whether we have stopped our "optimal stopping" at the optimal time. These gaps include:
I have a tool for thinking that I call "steelman solitaire". I have found that it leads to much better conclusions than "free-style" thinking, so I thought I should share it with more people. In summary, it consists of arguing with yourself in the program Workflowy, alternating between writing a steelman of an argument, a steelman of a counter-argument, a steelman of a counter-counter-argument, and so on. (I will explain steelmanning later in the post; in brief, it is the opposite of a strawman argument, in that steelmanning presents the strongest possible version of an opposing view.) In this blog post I'll first explain the broad steps, then list the benefits, and finally go into more depth on how to do it.
BENEFITS
THE BROAD IDEA
Strawmanning means presenting the opposing view in the least charitable light – often so uncharitably that it does not resemble the view the other side actually holds. The term "steelmanning" was coined as a counter to this; it means taking the opposing view and trying to present it in its strongest form. This has sometimes been criticized, because the alternative belief proposed by a steelman often isn't what the other people actually believe either. For example, there's a steelman argument that the reason to buy organic food is that monopolies are generally bad, and Monsanto having a monopoly on food could lead to disastrous consequences. This might indeed be a belief held by some people who are pro-organic, but a huge percentage of people are simply falling prey to the naturalistic fallacy. While steelmanning may not be perfect for understanding people's true reasons for believing propositions, it is very good for coming to more accurate beliefs yourself. If you believe you don't have to care about buying organic because you think people only buy organic out of the naturalistic fallacy, you might be missing a genuinely good reason to buy organic yourself: that monopolies on food are dangerous. However – and this is where steelmanning back and forth comes in – what if buying organic doesn't necessarily lead to breaking the monopoly? Maybe upon further investigation, Monsanto doesn't have a monopoly. Or maybe multiple organizations have patented different gene edits, so there's no true monopoly. The idea behind steelman solitaire is to not stop at steelmanning the opposing view, but to steelman the counter-counter-argument as well. As has been said by people more eloquent than I, you can't consider just one argument and one counter-argument and call yourself a virtuous rationalist. There are very long chains of counter^x arguments, and you want to consider the steelman of each of them. Don't pick a side in advance. Just commit to trying to find the true answer. This is all well and good in principle, but it can be challenging to keep organized. That is where Workflowy comes in. Workflowy allows you to nest counter-arguments under arguments, counter-counter-arguments under counter-arguments, and so forth. That way you can zoom in and focus on one particular line of reasoning, realize you've gone so deep that you've lost the forest for the trees, zoom out, and remind yourself what triggered the consideration in the first place. It also allows you to quickly look at the main arguments for and against. Here's a worked example for a question.
TIPS AND TRICKS
That's the broad-strokes explanation of the method. Below, I'll list a few pointers that I follow, though please do experiment and tweak. This is by no means a final product.
CONCLUSION
In summary, steelman solitaire means steelmanning arguments back and forth repeatedly. It helps with:
Acknowledgements. I'd like to thank Spencer Greenberg both for inspiring the original idea with Clearer Thinking's Belief Challenger tool and for coming up with a much better name for the concept than my original "steelmanning back and forth".