Better has an ambitious mission: to research, develop, and share the best ways people can improve their lives and the world. There are countless ways to potentially improve life; over millennia, humanity has tried everything from protective magic to ward off evil spirits to caloric restriction to extend lifespan. Unfortunately, the majority of these ways don’t work. Of the ones that do, only a small fraction are the absolute best ways to try. There has never been a dedicated effort to identify the absolute best ways to improve life—until now, with Better.
Given the enormous breadth and depth of information that exists about life, and the challenge of separating what’s best from what works and what works from what doesn’t, Better has begun developing a research framework that will allow us—and humanity in general—to find the best ways to live life as efficiently as possible.
The EPITOME Framework
We are calling our initial framework the EPITOME framework. It has five steps:
- Exploration: There are many ways to potentially improve life. Ideally, we would find as many of them as possible to make sure we don’t miss low-hanging fruit or ways that are better than the ones we’ve already found. Practically, we want to find the best ways as quickly as possible and ignore the others. There are two types of exploration: active and passive. With active exploration, we will try to quickly find the best ways to improve a specific aspect of life, relying on precise research scoping, efficient ways to scan broad amounts of information, trusted information sources, and high-quality evidence. With passive exploration, we will monitor high-quality information sources and leverage crowdsourcing to discover new information and ways that could affect our existing recommendations.
- Prioritization: Some ways to improve life are much better than others. Once we learn about a new way, we need to quickly estimate how promising it is so that we can determine if/when to invest more resources in researching it. We have created the ABCD prioritization framework to estimate how impactful a particular way to improve life is: we multiply the number of people it could affect (Audience) by the cost-and-risk-adjusted impact per person (Benefits), the likelihood of it working (Certainty), and the percentage of people who would adopt it (Difficulty).
- InvesTigation: If a way seems very promising, we need to determine whether it will work. Our investigation process involves following best practices in reasoning (including epistemic humility and reasoning from first principles), employing active exploration to gather pertinent information (including confirming and disconfirming evidence), and creating formal and informal reasoning aids to clarify reasoning and minimize bias (like argument maps and double crux) in order to reach the best conclusions.
- Optimization: If a way works, we need to determine if it is the best way to improve a specific aspect of life. We will first compare the way on a broader level with other ways and determine which high-level way is best. Ideally, we would have analyzed other high-level ways in the past, and if not, research into other high-level ways is required. If the high-level way looks promising, we will examine the details of how someone can best implement that way, which can include identifying the best practical steps to take or things to buy to best implement the way.
- MEdiatization: Mediatization is the process by which media shapes important areas of society like politics and business. We will produce media (writing, videos, etc.) that recommends the best ways to improve life, and we will optimize it to have the highest impact possible. Oftentimes, this will involve explicitly highlighting how a way can influence people, organizations, and society as a whole.
The EPITOME framework relies on many mental models and heuristics (some opinionated, some not) for how the world works, how to do research, and how to reason. We’ve documented these models and heuristics across seven steps, in rough order of when they are used in EPITOME.
Step 0.1: How The World Works
Better exists because we believe that there are many things that are imperfect in the world, and many of these imperfections are entirely optional and inflicted by individuals and society upon themselves. Individuals and society do this because they are either less than perfectly moral, or less than perfectly rational and strategic. Our research method will focus on the latter problem; we believe that with better knowledge and reasoning, we can make things better.
Eliezer Yudkowsky’s book Inadequate Equilibria, which is available for free online, covers why we do not live in an “adequate civilization,” and instead in one where imperfections on an individual and societal level exist at a vast scale and produce unimaginable levels of harm. In Chapter 1, on an individual level, Eliezer describes how he treated his wife’s depression with a simple strategy that, tragically, could likely improve the lives of millions of other depressed people but is completely ignored by everyone. On a societal level, he describes how the Bank of Japan, a reputable institution, needlessly inflicted trillions of dollars of damage on its own economy, and how such a thing is even possible. Understanding humanity’s civilizational inadequacy is critical in understanding why Better even exists; ideally, there would not be imperfections so large we can find easy ways to greatly improve the lives of millions of people, but sadly we can.
The Open Philanthropy Project’s article on Reasoning Transparency covers why the vast majority of writing (and information) does not make its reasoning easy to replicate. This is unfortunate because it forces us either to trust a source or to spend large amounts of time trying to recreate its reasoning. We will use this understanding of reasoning transparency to distinguish higher-quality content from lower-quality content, as well as to shape Better’s internal and public writing and information.
Step 0.2: Reasoning
In addition to believing that better knowledge and reasoning can improve the world, a perspective supported by Inadequate Equilibria, here at Better we also adopt a stance of epistemic humility. Epistemic humility involves recognizing that our own knowledge and reasoning are limited, and that there is a possibility that other views are correct. We aim to almost never hold a belief or view with “100%” confidence. Given the number of people and beliefs on the planet, it is statistically nearly impossible for anyone, Better included, to be correct about everything they believe. It is also almost certain that future generations will view many of our current beliefs in a completely different light, as evidenced by our own views on the beliefs of past generations. Regarding confidence estimates, as documented in the book Superforecasting, people are generally very poor at accurately predicting things and accurately conveying their own level of confidence. We want to use the recommendations in Superforecasting to learn how to better convey our own levels of confidence in our recommendations and in the supporting arguments, evidence, and predictions behind them.
First principles reasoning is an important way to improve how we reason, as captured by Wait But Why in their article on how Elon Musk has been so successful. Farnam Street also has an article advocating for it and explaining how to do it. This is important in Better’s research. For example, we can use first principles reasoning to understand how common and less common ways to live life work and what effects they have on us. We can then use this understanding to determine which way is likely to be best, and why. As another example, first principles reasoning can help us understand how the world and life work, and then think of novel ways to improve them.
Cognitive biases are the many ways our brains fail to think rationally. The Decision Lab has an excellent list of common biases. Biases range from consistently underestimating how hard things will be (which is why projects always seem to run over time and over budget) to using how easily something comes to mind to estimate things (for example, overestimating the risk of plane travel due to media coverage of plane crashes, and underestimating the risk of car travel). Here at Better, we want to follow practices that minimize bias. With Better Research, we want to be especially cautious of confirmation bias, which can cause us to only search for evidence that confirms our existing views.
Step 0.3: Research Sources and Tools
Hierarchy of Information Sources
Information sources are of utmost importance when doing research. High-quality information sources employ good reasoning and evidence, and will allow us to quickly arrive at the right conclusions. Lower-quality information sources can cause us to miss out on key information and strategies, and as a result produce suboptimal recommendations.
We categorize information sources from the highest level to the lowest level.
Clearly, websites, academic studies, and expert interviews are very different from one another. All have their pros and cons. We will generally use multiple types of sources, in particular, websites, academic research, books, and people in our network. We will most commonly use online research due to its ease of access, speed, and breadth. Unfortunately there is a lot of low-quality content on the internet, and we will need to use various strategies to overcome this problem. The following categories will use websites as the source type.
General-Purpose Information Sources
General-purpose information sources can be used in isolation or with several related sources to find information on a vast array of topics. The highest-level general information source for the web would be a top-level domain, like .edu, which, for the most part, can only be used by accredited institutions of higher education.
This category also includes common websites like Reddit and Wikipedia, which we commonly use at Better (we can use websites in many ways, and not necessarily for evidence; for example, Wikipedia can be a good gauge of the neutral/consensus view on a topic).
Individual websites and small collections of websites in a certain area also fall under this classification. For example, we will sometimes search thoughtful publications such as Aeon, Nautilus, and BBC Future to get well-researched, thoughtful takes on issues, as well as websites that offer book summaries to access information that may only be available in books.
Broad Information Sources
We use broad information sources to reliably get information about a specific area of knowledge. For example, the Cochrane Collaboration is a great way to understand the consensus academic view on medical issues, and Consumer Reports is a great way to get information on the best products in a particular area.
Better is particularly interested in organized information sources for a specific area, which we call a database. An example of a database would be Greater Good in Action, an organized resource on positive psychology.
Specific Information Sources
Specific information sources contain information about a very specific topic. This is generally a standalone article or webpage. Here, the reliability of the publisher matters, but so does the actual content in the source itself.
Evaluating Evidence/Source Quality
For a specific piece of information from a source, we consider factors such as:
- Relevance - Does this source pertain to what we are researching? When was it published, and is the information still accurate?
- Author - Is the author knowledgeable and trustworthy? Do they have any biases or conflicts of interest (why was the source published)? How good of a thinker are they in general, and how much do they know about this specific topic?
- Language - How well does the author speak/write? Is this an intellectual source? How long is it? How good is the author’s reasoning, reasoning transparency, and intellectual honesty?
- Evidence - Does the author cite high-quality sources? How many? How high on the evidence hierarchy is the evidence presented? Is the evidence accurately represented?
- Quality - What is the production value of the piece of information? Is it high budget? Well designed? Using good technology? Easy to navigate to find relevant information?
At a broader level, we consider factors such as:
- Specific Information Context - How do other sources perceive the source? How about the internal commentary (comments directly on an article) and external commentary (comments on third-party platforms like Reddit and Twitter)?
- Broad Information Context - Where is the article positioned in the overall map/spectrum of belief and consensus? Does the article present multiple points of view, or just one?
- Meta - How do these factors apply to other pieces of information from the same source, and the source itself (for example, the production value of the entire website, not just a specific article)? What about evidence and sources cited by the source?
Each factor can be scored on a numeric scale, for example 1–5, and the scores can be combined into a holistic score for the source.
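As a rough illustration, the factor scores above could be combined into a weighted average. The factor names follow the lists above, but the weights below are hypothetical placeholders, not Better’s actual values:

```python
# A minimal sketch of combining 1-5 factor ratings into a holistic
# source score. Weights are hypothetical and sum to 1.0 so the result
# stays on the same 1-5 scale as the inputs.

WEIGHTS = {
    "relevance": 0.3,
    "author": 0.2,
    "language": 0.1,
    "evidence": 0.3,
    "quality": 0.1,
}

def source_score(ratings: dict) -> float:
    """Weighted average of 1-5 factor ratings."""
    if set(ratings) != set(WEIGHTS):
        raise ValueError("ratings must cover exactly the known factors")
    for name, rating in ratings.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"rating out of range: {name}={rating}")
    return sum(WEIGHTS[name] * ratings[name] for name in WEIGHTS)

# Example: a relevant, well-evidenced article by a lesser-known author.
score = source_score({"relevance": 5, "author": 3, "language": 4,
                      "evidence": 4, "quality": 3})
```

In practice, the weights would be tuned to reflect how much each factor actually predicts source reliability, and broader-level factors (context, meta) could be folded in the same way.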
When evaluating academic research, LessWrong has a great article introducing the why and how of literature reviews.
We recommend learning about and using advanced Google search techniques. This series of articles covers some common techniques we use, including site-limited searches (air quality site:bbc.com/future) and exact searches ("delegative" "DAO").
It is very helpful to store your research progress in a knowledge management tool to synthesize information and for future reference. This could be done with a research-specific tool, like Zotero, or in a general-purpose tool, like Notion (which we use at Better).
Step 1: Research Scoping
It is important to understand the exact research question and scope of the research in order to avoid hours of wasted time searching for irrelevant information. It can help to create a research plan before doing the research, as well as break the broad research question down into parts.
Step 2: Breadth Research Methods
When researching a new/unfamiliar space, it’s important to get a sense of the entire landscape in an efficient manner. If this isn’t done, it could be possible to miss entire areas of knowledge that could be pivotal in affecting research outcomes. This process also helps with understanding and prioritizing the specific items that need to be researched, as well as identifying quality sources of information.
There are several strategies that can help with rapid learning:
- Scan general-purpose information sources to quickly get a sense of the entire space. For example, Wikipedia, .edu sites, and Reddit.
- Identify high-quality broad information sources and/or use existing sources if available. For example, if you are researching a health topic, it would be prudent to check sources like the Cochrane Collaboration, which does unbiased systematic reviews of health-related topics.
- Scan the topics covered by organized collections of information in the area of research, such as books, online courses, and online databases. You can also read reviews and summaries to go slightly more in-depth.
- Deliberately search for different perspectives.
Knowledge gaps are missing areas/bits of information that can greatly affect research outcomes. For example, Better often searches for the best products and services to meet certain needs. If we are unable to find some that actually exist, we may provide the wrong recommendation, or even mistakenly conclude that no relevant options exist. This can have significant consequences on our public recommendations and internal decisions.
In order to avoid this, you can utilize several strategies to find relevant information:
- When using online searches, try searching many times in a row with different keywords, changing them incrementally or dramatically depending on what results come up
- Use keywords and phrases from relevant content that you find to inspire additional searches; this can involve diving deeply into what you are researching and finding specialized terminology that can assist the search
- Try using exact word and phrase searches, which involves using quotation marks around keywords with Google Search
- Use the triangulation method to find information, which includes a variety of strategies
- Try different informational sources to find information (for example, try Google Search, Google Books, Google Scholar, and if time allows, also interview experts, etc.)
- If resources permit, have different people try to find the same information separately
- Utilize different data sources and methods
- Specifically seek out related/adjacent sources of knowledge (for example, research other infectious diseases when researching COVID-19) and prior attempts to do or research something similar (for example, if designing a process or framework, look for similar processes and frameworks)
Step 3: Depth Research Methods
Similarly to breadth research, referring to high-quality general-purpose and broad information sources is a rapid way to conduct depth research. Depth research places a particularly strong emphasis on using previously trusted, high-caliber sources, since the goal is deep understanding rather than gaining an awareness of all of the knowledge and arguments that exist in a certain space.
Often, previously trusted sources either don’t exist or are insufficient to answer the research question at hand. In this case, it’s necessary to assess the quality of sources in order to determine how much they can be trusted and whether they can be used as vetted sources for future research.
Many such techniques are mentioned in the section on evaluating sources. We’re particularly interested in:
- The quality and sophistication of writing/speech
- The author’s display of good thinking and motivations, including synthesizing information and perspectives from multiple viewpoints, employing quantitative reasoning, and exhibiting reasoning transparency and intellectual honesty
- The author’s use of high-quality information sources themselves, and the perspectives of high-quality information sources on the author’s work
It’s important to note that sources can be put to good use even if they don’t meet all of our criteria. For example, if a source is biased, that can be a way to identify all possible arguments and evidence against a certain point. If the biased sources have weak evidence and arguments, that can be used to infer that the opposing position is quite strong. This method can also be used to double check the reasoning for one side or the other once an initial conclusion has been drawn.
Investigating and interpreting primary sources is the most reliable way to go about research, but it is exceptionally time consuming. If trusted secondary sources have already examined primary sources, those secondary sources can be used to significantly speed up the research process. It is important to be able to understand the reasoning and evidence when utilizing secondary information sources, and it is also possible to randomly or methodically audit their interpretations of primary sources.
Step 4: Reasoning Process
Before, during, and after information has been obtained, regardless of whether that information is vetted, it’s important to independently reason through key arguments and evidence underlying those arguments, and to use that process to shape further research and evidence gathering.
As an example for using reasoning to drive information gathering, if reasoning from first principles, it’s necessary to gather information about how something fundamentally works (like how Elon Musk double-checked the raw materials cost of rockets before determining it would be feasible to start a rocket company).
Our overall reasoning process should involve independently generating supporting and opposing arguments along with evidence to back arguments up. It can be incredibly helpful to employ reasoning aids if time allows to help with this process.
In order from least to most rigorous, a researcher can use:
- Purely mental processing of information and reasoning
- Quick, informal notes
- Formal notes, like listing out arguments and evidence
- Argument maps, which involve visually organizing arguments and evidence, and can be created with software like Mindmup
We strongly support the use of argument maps given evidence of their considerable efficacy, as well as future techniques and software aids that will further enhance individual and collective reasoning.
While doing research and reasoning, particularly with topics that aren’t very clear cut (unlike solved problems in the hard sciences), it’s important to actively fight against cognitive biases that can impair the research process. One of the most significant biases that can impair reaching accurate conclusions is confirmation bias: the tendency to search for information that confirms existing beliefs. In order to avoid confirmation bias, we need to deliberately identify disconfirming sources and seek out disconfirming arguments and evidence. This is closely related to falsification.
To assist with this type of reasoning, we recommend pre-mortems to identify potentially unseen issues and problems with reasoning, and identifying argument cruxes to direct research attention to the most critical components of an argument.
As a final note, Fermi estimates and other forms of quantitative reasoning can be helpful in reasoning about issues, but will not be covered extensively.
At first, we will publish our recommendations in our initial recommendation template. This template adapts to whichever phase of EPITOME a potential recommendation is in.
- Only the title is required so that recommendations can be added with minimal friction. Structure the title like this: [Title Name] - [Contributor Name]. Organize the page in the appropriate place in the hierarchy for Better’s areas of life.
- Consider adding quick notes to the page regarding the recommendation and its potential importance. Include relevant sources.
- Set the page to be internally visible only, since we want to publish pages that are better developed. [Consider publishing this with a minimal body and tallying viewer interest and engagement? Or maybe save tallying for prioritization instead?]
- Add the recommendation template to the page body if it hasn’t been added already.
- Fill in the ABCD score with initial estimates for the audience, benefits, certainty, and difficulty, as well as the impact score.
- In order to publish the template, the Research, Summary, and Recommendation sections must be filled out. Include sources and key takeaways from those sources in the Learn More section. Do not feature recommendations at this stage of research.
- All page sections must be filled out, including all recommendations (individual, organizational, societal).
- The ABCD and impact scores should be based on solid reasoning and evidence.
- The page must pass internal testing and all required changes must be made before the page can officially be updated with this research phase.
- Update the page with optimization-specific information for relative efficacy and specific implementation recommendations. Additional research may not be required if the space has already been researched.
- Insert analytics and monetization links if applicable.
- This page must pass external testing before the page can be officially updated with this research phase and become eligible to be featured.
- The page is now ready for widespread release and for use in shaping society!
- Track page performance especially in the first few days after release, and make changes to the page as needed.
- After publication, update certain parts of the page if applicable, such as with updated product links, or incorporating performance metrics into Audience and Difficulty scores.
One of the primary advantages of Better’s life recommendations is that they include both the benefits and the costs of each recommendation, and compare each recommendation against all of the other possible recommendations to identify the truly best things someone can do immediately to improve their life.
Overall Impact (Universal Conversion Method)
One of the most significant challenges when it comes to comparing life recommendations is how to deal with different units of impact—how do you compare something like saving for retirement with exercise? Various mechanisms have been proposed for this. Recent attempts have proposed converting all metrics into a single metric; for instance, WELLBYs (Well-Being Adjusted Life Years, where life years are generally adjusted based on self-reported life satisfaction).
In our opinion, while converting everything into a single metric is an excellent way to quantify the emotional benefits of certain recommendations, this approach has substantial drawbacks. For now, we will set aside the substantial challenges surrounding quantifying happiness and how various things affect it. Even if those were solved, our current impression is that this method is too removed both from what people actually want and from everyday decision making, which typically go beyond happiness. For example, it may be the case that becoming a parent decreases or has no impact on happiness, but people still want to become parents. It may be the case that earning $100/hour has a very tiny positive effect on happiness compared to doing a fulfilling activity, but people would likely still choose the $100.
Instead of converting everything to happiness, we convert everything to everything else without a central substantive metric. Any new metric can be introduced into the conversion if there is a sensible conversion factor to an existing factor. For instance, let’s say we value an hour of time at $20. If we need to assess how much value to assign an increase in lifetime net worth, rather than converting the net worth increase directly to happiness, we would convert the net worth to the annual earnings required to obtain that net worth, which could then be directly converted to time at $20/hour and to any other factor involved in the conversion. We utilize a neutral unit that is not representative of anything and increases at a fixed linear rate. This neutral unit, the “impact score,” is pegged to everything else; any factor can be converted into it and back out into any other factor.
We use this impact score to gauge the net impact of a specific recommendation by quantifying the benefits and costs, converting them to the impact score if the factors are different, and then subtracting the costs from the benefits to get the net benefit. This number becomes the final impact score of a recommendation, which can then be compared to any other recommendation.
We believe this quantification method is more reflective of real-world tradeoffs and decision making compared to directly converting everything into a single metric.
| Impact Score | Financial - Lifetime Net Worth | Financial - Lifetime Earnings (After Tax) | Time - Hours Saved |
| --- | --- | --- | --- |
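A minimal sketch of the conversion and net-impact calculation, using the $20/hour value of time mentioned above; the peg of one impact point per hour is an assumed placeholder, not Better’s actual unit:

```python
# Sketch of the universal conversion method: every factor converts into
# a neutral "impact score" unit and back out into any other factor.
# The $20/hour value of time comes from the text; the 1-point-per-hour
# peg is a hypothetical choice for illustration.

HOUR_VALUE_USD = 20.0   # value of one hour of time (from the text)
IMPACT_PER_HOUR = 1.0   # assumed peg: 1 impact point per hour

def hours_to_impact(hours: float) -> float:
    return hours * IMPACT_PER_HOUR

def dollars_to_impact(dollars: float) -> float:
    # Dollars -> hours at $20/hour -> impact points.
    return hours_to_impact(dollars / HOUR_VALUE_USD)

def net_impact(benefits_impact: float, costs_impact: float) -> float:
    """Net benefit of a recommendation, in impact points."""
    return benefits_impact - costs_impact

# Example: a recommendation that saves $2,000 over a lifetime but takes
# 10 hours to implement.
benefit = dollars_to_impact(2000)   # 100 impact points
cost = hours_to_impact(10)          # 10 impact points
print(net_impact(benefit, cost))    # prints 90.0
```

Because every factor converts through the same neutral unit, the resulting net impact scores of very different recommendations (financial, time-saving, health-related) can be compared directly.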
Prioritization (ABCD Framework)
After calculating the impact score, we consider other factors to determine how impactful a recommendation is, which affects internal research prioritization as well as the likelihood for individuals to adopt a recommendation.
Our ABCD prioritization framework involves multiplying the Audience (the potential audience as a fraction of the population) by the Benefits (net benefits, adjusted for costs), Certainty (strength of confidence that the recommendation works), and Difficulty (difficulty of encouraging behavior change). The resulting number represents the estimated net impact of publishing a recommendation.
ABCD values are estimated as follows:
- Audience: Base the audience on the fraction of the US population that has not already adopted the recommendation and for which the recommendation applies
- Benefits: Quantify the benefits, costs, and risks of a recommendation, convert all factors to impact scores, then subtract the costs and risks from the benefits to arrive at the net impact score
- Certainty: Quantify the following from 0% to 100% and multiply them to get the certainty adjustment: Consensus (our perception of the state of consensus around the efficacy/advisability of pursuing an intervention) and Confidence (our confidence in the benefits being realized, adjusted for the degree and likelihood of negative variance)
- Difficulty: The estimated fraction of the audience that will follow the recommendation after hearing about it (can be roughly estimated by gauging the variance in reactions to a similar or identical recommendation provided in randomly sampled online discussions)
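The ABCD calculation above can be sketched as follows; the formula (Audience × Benefits × Certainty × Difficulty, with Certainty = Consensus × Confidence) follows the text, while the example numbers are hypothetical:

```python
# Sketch of the ABCD prioritization framework described above.

def certainty(consensus: float, confidence: float) -> float:
    """Certainty adjustment: product of two 0-1 estimates."""
    return consensus * confidence

def abcd_impact(audience: float, benefits: float,
                consensus: float, confidence: float,
                difficulty: float) -> float:
    """Estimated net impact of publishing a recommendation.

    audience:   fraction of the population the recommendation applies to
                and that has not already adopted it (0-1)
    benefits:   net impact score per person (benefits minus costs/risks)
    difficulty: fraction of the audience expected to adopt it (0-1)
    """
    return audience * benefits * certainty(consensus, confidence) * difficulty

# Hypothetical example: applies to 50% of people, 50 impact points per
# person, 80% consensus, 50% confidence, 25% expected adoption.
score = abcd_impact(0.5, 50, 0.8, 0.5, 0.25)
```

Because the factors multiply, a weakness in any one of them (a tiny audience, low certainty, or very low adoption) sharply reduces the estimated impact, which is what makes the score useful for prioritization.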
Right now, the conversion and prioritization options are fully fixed and based on average US statistics. We are in the process of developing a knowledge management app which will enable fully customizable recommendations per user. Users will be able to provide the app with background information like age and their personal monetary value of time, and have the app generate personalized recommendations depending on what someone values most.