Machine Intelligence

  • March 2019 Newsletter
    Want to be in the reference class “people who solve the AI alignment problem”? We now have a guide on how to get started, based on our experience of what tends to make research groups successful. (Also on the AI Alignment Forum.)

    Other updates:
    • Demski and Garrabrant’s introduction to MIRI’s agent foundations research, “Embedded Agency,” is now available (in lightly edited form) as an arXiv paper.
    • New research posts: How Does Gradient Descent Interact with Goodhart?; “Normative Assumptions” Need Not Be Complex; How the MtG Color Wheel Explains AI Safety; Pavlov Generalizes.
    • Several MIRIx groups are expanding and are looking for new members to join.
    • Our summer fellows program is accepting applications through March 31.
    • LessWrong’s web edition of Rationality: From AI to Zombies at lesswrong.com/rationality is now fully updated to reflect the print editions of Map and Territory and How to Actually Change Your Mind, the first two books. (Announcement here.)

    News and links:
    • OpenAI’s GPT-2 model shows meaningful progress on a wide variety of language tasks. OpenAI adds: “Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. […] We believe our release strategy limits the initial set of organizations who may choose to [open source our results], and gives the AI community more time to have a discussion about the implications of such systems.”
    • The Verge discusses OpenAI’s language model concerns along with MIRI’s disclosure policies for our own research. See other discussion by Jeremy Howard, John Seymour, and Ryan Lowe.
    • AI Impacts summarizes evidence on good forecasting practices from the Good Judgment Project.
    • Recent AI alignment ideas and discussion: Carey on quantilization; Filan on impact regularization methods; Saunders’ HCH Is Not Just Mechanical Turk and RL in the Iterated Amplification Framework; Dai on philosophical difficulty (1, 2); Hubinger on ascription universality; and Everitt’s Understanding Agent Incentives with Causal Influence Diagrams.
    • The Open Philanthropy Project announces their largest grant to date: $55 million to launch the Center for Security and Emerging Technology, a Washington, D.C. think tank with an early focus on “the intersection of security and artificial intelligence”. See also CSET’s many job postings.

    The post March 2019 Newsletter appeared first on Machine Intelligence Research Institute. Read more »
  • Applications are open for the MIRI Summer Fellows Program!
    CFAR and MIRI are running our fifth annual MIRI Summer Fellows Program (MSFP) in the San Francisco Bay Area from August 9 to August 24, 2019. Applications are available here and are due by March 31, 2019 (23:59 PDT).

    MSFP is an extended retreat for mathematicians and programmers with a serious interest in making technical progress on the problem of AI alignment. It includes an overview of CFAR’s applied rationality content, a breadth-first grounding in the MIRI perspective on AI safety, and multiple days of actual hands-on research with participants and MIRI staff attempting to make inroads on open questions.

    Program Description

    The intent of the program is to boost participants, as far as possible, in four overlapping areas:

    • Doing rationality inside a human brain: understanding, with as much fidelity as possible, what phenomena and processes drive and influence human thinking and reasoning, so that we can account for our own biases and blindspots, better recruit and use the various functions of our brains, and, in general, be less likely to trick ourselves, gloss over our confusions, or fail to act in alignment with our endorsed values.
    • Epistemic rationality, especially the subset of skills around deconfusion: building the skill of noticing where the dots don’t actually connect; answering the question “why do we think we know what we think we know?”, particularly when it comes to predictions and assertions around the future development of artificial intelligence.
    • Grounding in the current research landscape surrounding AI: being aware of the primary disagreements among leaders in the field, and the arguments for various perspectives and claims. Understanding the current open questions, and why different ones seem more pressing or real under different assumptions. Being able to follow the reasoning behind various alignment schemes/theories/proposed interventions, and being able to evaluate those interventions with careful reasoning and mature (or at least more-mature-than-before) intuitions.
    • Generative research skill: the ability to make real and relevant progress on questions related to the field of AI alignment without losing track of one’s own metacognition. The parallel processes of using one’s mental tools, critiquing and improving one’s mental tools, and making one’s own progress or deconfusion available to others through talks, papers, and models. Anything and everything involved in being the sort of thinker who can locate a good question, sniff out promising threads, and collaborate effectively with others and with the broader research ecosystem.

    Food and lodging are provided free of charge at CFAR’s workshop venue in Bodega Bay, California. Participants must be able to remain onsite, largely undistracted, for the duration of the program (e.g. no major appointments in other cities, no large looming academic or professional deadlines just after the program).

    Applications are available here and are due by March 31, 2019 (23:59 PDT). Finalists will be contacted by a MIRI staff member for 1–2 Skype interviews sometime between April 1 and April 21. Admissions decisions — yes, no, waitlist — will go out no later than April 30th. If you have any questions or comments, please send an email to colm@intelligence.org, or, if you suspect others would also benefit from hearing the answer, post them here.

    The post Applications are open for the MIRI Summer Fellows Program! appeared first on Machine Intelligence Research Institute. Read more »
  • A new field guide for MIRIx
    We’ve just released a field guide for MIRIx groups, and for other people who want to get involved in AI alignment research.

    MIRIx is a program where MIRI helps cover basic expenses for outside groups that want to work on open problems in AI safety. You can start your own group or find information on existing meet-ups at intelligence.org/mirix. Several MIRIx groups have recently been ramping up their activity, including:

    • UC Irvine: Daniel Hermann is starting a MIRIx group in Irvine, California. Contact him if you’d like to be involved.
    • Seattle: MIRIxSeattle is a small group that’s in the process of restarting and increasing its activities. Contact Pasha Kamyshev if you’re interested.
    • Vancouver: Andrew McKnight and Evan Gaensbauer are looking for more people who’d like to join MIRIxVancouver events.

    The new alignment field guide is intended to provide tips and background models to MIRIx groups, based on our experience of what tends to make a research group succeed or fail. The guide begins:

    Preamble I: Decision Theory

    Hello! You may notice that you are reading a document. This fact comes with certain implications. For instance, why are you reading this? Will you finish it? What decisions will you come to as a result? What will you do next?

    Notice that, whatever you end up doing, it’s likely that there are dozens or even hundreds of other people, quite similar to you and in quite similar positions, who will follow reasoning which strongly resembles yours, and make choices which correspondingly match.

    Given that, it’s our recommendation that you make your next few decisions by asking the question “What policy, if followed by all agents similar to me, would result in the most good, and what does that policy suggest in my particular case?” It’s less of a question of trying to decide for all agents sufficiently-similar-to-you (which might cause you to make the wrong choice out of guilt or pressure) and more something like “if I were in charge of all agents in my reference class, how would I treat instances of that class with my specific characteristics?”

    If that kind of thinking leads you to read further, great. If it leads you to set up a MIRIx chapter, even better. In the meantime, we will proceed as if the only people reading this document are those who justifiably expect to find it reasonably useful.

    Preamble II: Surface Area

    Imagine that you have been tasked with moving a cube of solid iron that is one meter on a side. Given that such a cube weighs ~16000 pounds, and that an average human can lift ~100 pounds, a naïve estimation tells you that you can solve this problem with ~150 willing friends.

    But of course, a meter cube can fit at most something like 10 people around it. It doesn’t matter if you have the theoretical power to move the cube if you can’t bring that power to bear in an effective manner. The problem is constrained by its surface area.

    MIRIx chapters are one of the best ways to increase the surface area of people thinking about and working on the technical problem of AI alignment. And just as it would be a bad idea to decree “the 10 people who happen to currently be closest to the metal cube are the only ones allowed to think about how to think about this problem”, we don’t want MIRI to become the bottleneck or authority on what kinds of thinking can and should be done in the realm of embedded agency and other relevant fields of research.

    The hope is that you and others like you will help actually solve the problem, not just follow directions or read what’s already been written. This document is designed to support people who are interested in doing real groundbreaking research themselves. (Read more)

    The post A new field guide for MIRIx appeared first on Machine Intelligence Research Institute. Read more »
  • February 2019 Newsletter
    Updates:
    • Ramana Kumar and Scott Garrabrant argue that the AGI safety community should begin prioritizing “approaches that work well in the absence of human models”: “[T]o the extent that human modelling is a good idea, it is important to do it very well; to the extent that it is a bad idea, it is best to not do it at all. Thus, whether or not to do human modelling at all is a configuration bit that should probably be set early when conceiving of an approach to building safe AGI.”
    • New research forum posts: Conditional Oracle EDT Equilibria in Games; Non-Consequentialist Cooperation?; When is CDT Dutch-Bookable?; CDT=EDT=UDT.
    • The MIRI Summer Fellows Program is accepting applications through the end of March! MSFP is a free two-week August retreat co-run by MIRI and CFAR, intended to bring people up to speed on problems related to embedded agency and AI alignment, train research-relevant skills and habits, and investigate open problems in the field.
    • MIRI’s Head of Growth, Colm Ó Riain, reviews how our 2018 fundraiser went.
    • From Eliezer Yudkowsky: “Along with adversarial resistance and transparency, what I’d term ‘conservatism’, or trying to keep everything as interpolation rather than extrapolation, is one of the few areas modern ML can explore that I see as having potential to carry over directly to serious AGI safety.”

    News and links:
    • Eric Drexler has released his book-length AI safety proposal: Reframing Superintelligence: Comprehensive AI Services as General Intelligence. See discussion by Peter McCluskey, Richard Ngo, and Rohin Shah.
    • Other recent AI alignment posts include Andreas Stuhlmüller’s Factored Cognition and Alex Turner’s Penalizing Impact via Attainable Utility Preservation, and a host of new write-ups by Stuart Armstrong.

    The post February 2019 Newsletter appeared first on Machine Intelligence Research Institute. Read more »
  • Thoughts on Human Models
    This is a joint post by MIRI Research Associate and DeepMind Research Scientist Ramana Kumar and MIRI Research Fellow Scott Garrabrant, cross-posted from the AI Alignment Forum and LessWrong.

    Human values and preferences are hard to specify, especially in complex domains. Accordingly, much AGI safety research has focused on approaches to AGI design that refer to human values and preferences indirectly, by learning a model that is grounded in expressions of human values (via stated preferences, observed behaviour, approval, etc.) and/or real-world processes that generate expressions of those values. There are additionally approaches aimed at modelling or imitating other aspects of human cognition or behaviour without an explicit aim of capturing human preferences (but usually in service of ultimately satisfying them). Let us refer to all these models as human models.

    In this post, we discuss several reasons to be cautious about AGI designs that use human models. We suggest that the AGI safety research community put more effort into developing approaches that work well in the absence of human models, alongside the approaches that rely on human models. This would be a significant addition to the current safety research landscape, especially if we focus on working out and trying concrete approaches as opposed to developing theory. We also acknowledge various reasons why avoiding human models seems difficult.

    Problems with Human Models

    To be clear about human models, we draw a rough distinction between our actual preferences (which may not be fully accessible to us) and procedures for evaluating our preferences. The first thing, actual preferences, is what humans actually want upon reflection. Satisfying our actual preferences is a win. The second thing, procedures for evaluating preferences, refers to various proxies for our actual preferences such as our approval, or what looks good to us (with necessarily limited information or time for thinking). Human models are in the second category; consider, as an example, a highly accurate ML model of human yes/no approval on the set of descriptions of outcomes. Our first concern, described below, is about overfitting to human approval and thereby breaking its connection to our actual preferences. (This is a case of Goodhart’s law.)

    Less Independent Audits

    Imagine we have built an AGI system and we want to use it to design the mass transit system for a new city. The safety problems associated with such a project are well recognised; suppose we are not completely sure we have solved them, but are confident enough to try anyway. We run the system in a sandbox on some fake city input data and examine its outputs. Then we run it on some more outlandish fake city data to assess robustness to distributional shift. The AGI’s outputs look like reasonable transit system designs and considerations, and include arguments, metrics, and other supporting evidence that they are good. Should we be satisfied and ready to run the system on the real city’s data, and to implement the resulting proposed design?

    We suggest that an important factor in the answer to this question is whether the AGI system was built using human modelling or not. If it produced a solution to the transit design problem (that humans approve of) without human modelling, then we would more readily trust its outputs.
    If it produced a solution we approve of with human modelling, then although we expect the outputs to be in many ways about good transit system design (our actual preferences) and in many ways suited to being approved by humans, to the extent that these two targets come apart we must worry about having overfit to the human model at the expense of the good design. (Why not the other way around? Because our assessment of the sandboxed results uses human judgement, not an independent metric for satisfaction of our actual preferences.)

    Humans have a preference for not being wrong about the quality of a design, let alone being fooled about it. How much do we want to rely on having correctly captured these preferences in our system? If the system is modelling humans, we strongly rely on the system learning and satisfying these preferences, or else we expect to be fooled to the extent that a good-looking but actually bad transit system design is easier to compose than an actually-good design. On the other hand, if the system is not modelling humans, then the fact that its output looks like a good design is better evidence that it is in fact a good design. Intuitively, if we consider sampling possible outputs and condition on the output looking good (via knowledge of humans), the probability of it being good (via knowledge of the domain) is higher when the system’s knowledge is more about what is good than what looks good.

    Here is a handle for this problem: a desire for an independent audit of the system’s outputs. When a system uses human modelling, the mutual information between its outputs and the auditing process (human judgement) is higher. Thus, using human models reduces our ability to do independent audits.

    Avoiding human models does not avoid this problem altogether. There is still an “outer-loop optimisation” version of the problem. If the system produces a weird or flawed design in sandbox, and we identify this during an audit, we will probably reject the solution and attempt to debug the system that produced it. This introduces a bias on the overall process (involving multiple versions of the system over phases of auditing and debugging) towards outputs that fool our auditing procedure. However, outer-loop optimisation pressures are weaker, and therefore less worrying, than in-loop optimisation pressures. We would argue that the problem is much worse, i.e., the bias towards fooling is stronger, when one uses human modelling. This is because the relevant optimisation is in-loop instead and is encountered more often.

    As one more analogy to illustrate this point, consider a classic Goodhart’s law example of teaching to the test. If you study the material, then take a test, your test score reveals your knowledge of the material fairly well. If you instead study past tests, your test score reveals your ability to pass tests, which may be correlated with your knowledge of the material but is increasingly less likely to be so correlated as your score goes up. Here human modelling is analogous to past tests and actual preferences are analogous to the material. Taking the test is analogous to an audit, which we want to be independent from the study regimen.
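    A toy simulation of this proxy-overfitting dynamic (not from the original post; the setup and numbers are purely illustrative): each candidate design has a true quality and an independent "looks good to the evaluator" component, and we select the candidate with the highest proxy score (the sum of the two). As selection pressure grows, the winner's proxy score keeps climbing roughly twice as fast as its true quality, because optimisation loads onto the looks-good noise as much as onto quality.

```python
# Illustrative sketch only (not from the post): regressional Goodhart /
# "teaching to the test" under increasing selection pressure.
import random

random.seed(0)

def sample_design():
    """A candidate design: true quality, plus an independent component
    that only affects how good it *looks* to a human-approval proxy."""
    true_quality = random.gauss(0, 1)
    looks_good_bonus = random.gauss(0, 1)   # appeal unrelated to quality
    proxy_score = true_quality + looks_good_bonus
    return true_quality, proxy_score

def select_best_by_proxy(n_candidates):
    """Pick the design with the highest proxy score and report its true quality."""
    return max((sample_design() for _ in range(n_candidates)),
               key=lambda d: d[1])

for n in (1, 10, 100, 1_000):
    trials = [select_best_by_proxy(n) for _ in range(1_000)]
    avg_true = sum(t[0] for t in trials) / len(trials)
    avg_proxy = sum(t[1] for t in trials) / len(trials)
    print(f"candidates={n:>5}: avg proxy score {avg_proxy:5.2f}, "
          f"avg true quality {avg_true:5.2f}")
```

    In the post's terms, the harder the process optimises what looks good to the human model, the less an audit based on the same human judgement tells us about what is actually good.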
    Risk from Bugs

    We might implement our first AGI system incorrectly in a mundane sense. Specifically, even if we fully develop a theory of safe or aligned AGI, we might fail to implement that theory due to bugs or problems with our implementation techniques. In this case, we would be relatively better off if the mutual information between the AGI’s knowledge and human preferences is low. We expect the system’s behaviour to be dependent on its knowledge in some way, and we expect implementation errors to shift the nature of that dependence away from our intentions and expectations. Incorrect behaviour that depends on human preferences seems more dangerous than incorrect behaviour that does not.

    Consider the space of AGI system implementations, under a metric like similarity to an intended design (equivalently: severity of deviation from the design due to bugs). We want all the points near the first AGI system we build to be safe, because we may end up with a slightly different design than intended, for reasons such as being confused about what we are doing or making implementation errors. There are at least three ways in which the risk from bugs can manifest.

    • Incorrectly Encoded Values: Suppose we intend the first use of AGI to be solving some bounded and well-specified task, but we misunderstand or badly implement it so much that what we end up with is actually unboundedly optimising some objective function. Then it seems better if that objective is something abstract like puzzle solving rather than something more directly connected to human preferences: consider, as a toy example, if the sign (positive/negative) around the objective were wrong.

    • Manipulation: The earlier arguments for independent audits do not just apply to the specific tasks we would plan to audit, but also to any activities an AGI system might carry out that humans might disapprove of. Examples include finding ways to hack into our supposedly secure systems, hiding its intentions and activity from us, or outright manipulating us. These tasks are much easier with access to a good psychological model of humans, which can be used to infer what mistakes we might make, or what loopholes we might overlook, or how we might respond to different behaviour from the system. Human modelling is very close to human manipulation in design space. A system with accurate models of humans is close to a system which successfully uses those models to manipulate humans.

    • Threats: Another risk from bugs comes not from the AGI system caring incorrectly about our values, but from having inadequate security. If our values are accurately encoded in an AGI system that cares about satisfying them, they become a target for threats from other actors who can gain from manipulating the first system. More examples and perspectives on this problem have been described here.

    The increased risk from bugs of human modelling can be summarised as follows: whatever the risk that AGI systems produce catastrophic outcomes due to bugs, the very worst outcomes seem more likely if the system was trained using human modelling, because these worst outcomes depend on the information in human models.

    Less independent audits and the risk from bugs can both be mitigated by preserving independence of the system from human model information, so the system cannot overfit to that information or use it perversely. The remaining two problems we consider, mind crime and unexpected agents, depend more heavily on the claim that modelling human preferences increases the chances of simulating something human-like.

    Mind Crime

    Many computations may produce entities that are morally relevant because, for example, they constitute sentient beings that experience pain or pleasure. Bostrom calls improper treatment of such entities “mind crime”.
    Modelling humans in some form seems more likely to result in such a computation than not modelling them, since humans are morally relevant and the system’s models of humans may end up sharing whatever properties make humans morally relevant.

    Unexpected Agents

    Similar to the mind crime point above, we expect AGI designs that use human modelling to be more at risk of producing subsystems that are agent-like, because humans are agent-like. For example, we note that trying to predict the output of consequentialist reasoners can reduce to an optimisation problem over a space of things that contains consequentialist reasoners. A system engineered to predict human preferences well seems strictly more likely to run into problems associated with misaligned sub-agents. (Nevertheless, we think the amount by which it is more likely is small.)

    Safe AGI Without Human Models is Neglected

    Given the independent auditing concern, plus the additional points mentioned above, we would like to see more work done on practical approaches to developing safe AGI systems that do not depend on human modelling. At present, this is a neglected area in the AGI safety research landscape. Specifically, work of the form “Here’s a proposed approach, here are the next steps to try it out or investigate further”, which we might term engineering-focused research, is almost entirely done in a human-modelling context. Where we do see some safety work that eschews human modelling, it tends to be theory-focused research, for example, MIRI’s work on agent foundations. This does not fill the gap of engineering-focused work on safety without human models.

    To flesh out the claim of a gap, consider the usual formulations of each of the following efforts within safety research: iterated distillation and amplification, debate, recursive reward modelling, cooperative inverse reinforcement learning, and value learning. In each case, there is human modelling built into the basic setup for the approach. However, we note that the technical results in these areas may in some cases be transportable to a setup without human modelling, if the source of human feedback (etc.) is replaced with a purely algorithmic, independent system.

    Some existing work that does not rely on human modelling includes the formulation of safely interruptible agents, the formulation of impact measures (or side effects), approaches involving building AI systems with clear formal specifications (e.g., some versions of tool AIs), some versions of oracle AIs, and boxing/containment. Although they do not rely on human modelling, some of these approaches nevertheless make most sense in a context where human modelling is happening: for example, impact measures seem to make most sense for agents that will be operating directly in the real world, and such agents are likely to require human modelling. Nevertheless, we would like to see more work of all these kinds, as well as new techniques for building safe AGI that does not rely on human modelling.

    Difficulties in Avoiding Human Models

    A plausible reason why we do not yet see much research on how to build safe AGI without human modelling is that it is difficult. In this section, we describe some distinct ways in which it is difficult.

    Usefulness

    It is not obvious how to put a system that does not do human modelling to good use. At least, it is not as obvious as for the systems that do human modelling, since they draw directly on sources (e.g., human preferences) of information about useful behaviour.
    In other words, it is unclear how to solve the specification problem—how to correctly specify desired (and only desired) behaviour in complex domains—without human modelling. The “against human modelling” stance calls for a solution to the specification problem wherein useful tasks are transformed into well-specified, human-independent tasks either solely by humans or by systems that do not model humans.

    To illustrate, suppose we have solved some well-specified, complex but human-independent task like theorem proving or atomically precise manufacturing. Then how do we leverage this solution to produce a good (or better) future? Empowering everyone, or even a few people, with access to a superintelligent system that does not directly encode their values in some way does not obviously produce a future where those values are realised. (This seems related to Wei Dai’s human-safety problem.)

    Implicit Human Models

    Even seemingly “independent” tasks leak at least a little information about their origins in human motivations. Consider again the mass transit system design problem. Since the problem itself concerns the design of a system for use by humans, it seems difficult to avoid modelling humans at all in specifying the task. More subtly, even highly abstract or generic tasks like puzzle solving contain information about the sources/designers of the puzzles, especially if they are tuned for encoding more obviously human-centred problems. (Work by Shah et al. looks at using the information about human preferences that is latent in the world.)

    Specification Competitiveness / Do What I Mean

    Explicit specification of a task in the form of, say, an optimisation objective (of which a reinforcement learning problem would be a specific case) is known to be fragile: there are usually things we care about that get left out of explicit specifications. This is one of the motivations for seeking ever more high-level and indirect specifications, leaving more of the work of figuring out what exactly is to be done to the machine. However, it is currently hard to see how to automate the process of turning tasks (vaguely defined) into correct specifications without modelling humans.
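    A small illustrative sketch of that fragility (not from the post; the transit attributes, scores, and weights below are invented for the example): an optimiser scored only on travel time happily selects a "design" that achieves fast trips by serving almost no one, because coverage was left out of the explicit specification.

```python
# Illustrative sketch only: a mis-specified objective being exploited.
# The design space and all numbers are invented for the example.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class TransitDesign:
    num_stops: int        # how many neighbourhoods are served
    express_lines: int    # how many express (few-stop) lines

    def avg_travel_time(self) -> float:
        # Fewer stops and more express lines mean faster trips.
        return 40.0 + 2.0 * self.num_stops - 5.0 * self.express_lines

    def coverage(self) -> float:
        # What we also care about, but left out of the specification.
        return self.num_stops / 20.0

# Candidate designs: 0-20 stops, 0-5 express lines.
candidates = [TransitDesign(s, e) for s, e in product(range(21), range(6))]

# Specified objective: minimise travel time only.
specified_best = min(candidates, key=lambda d: d.avg_travel_time())

# Intended objective: fast trips *and* broad coverage.
intended_best = min(candidates,
                    key=lambda d: d.avg_travel_time() - 60.0 * d.coverage())

print("specified objective picks:", specified_best,
      "coverage =", specified_best.coverage())
print("intended objective picks: ", intended_best,
      "coverage =", intended_best.coverage())
```

    The point is not the toy numbers but the shape of the failure: whatever the explicit specification leaves out is exactly what a strong optimiser is free to sacrifice, which is the pressure pushing towards more indirect, "do what I mean" specifications that rely on human models.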
    Performance Competitiveness of Human Models

    It could be that modelling humans is the best way to achieve good performance on various tasks we want to apply AGI systems to, for reasons that are not simply to do with understanding the problem specification well. For example, there may be aspects of human cognition that we want to more or less replicate in an AGI system, for competitiveness at automating those cognitive functions, and those aspects may carry a lot of information about human preferences with them in a hard-to-separate way.

    What to Do Without Human Models?

    We have seen arguments for and against aspiring to solve AGI safety using human modelling. Looking back on these arguments, we note that to the extent that human modelling is a good idea, it is important to do it very well; to the extent that it is a bad idea, it is best to not do it at all. Thus, whether or not to do human modelling at all is a configuration bit that should probably be set early when conceiving of an approach to building safe AGI. It should be noted that the arguments above are not intended to be decisive, and there may be countervailing considerations which mean we should promote the use of human models despite the risks outlined in this post.

    However, to the extent that AGI systems with human models are more dangerous than those without, there are two broad lines of intervention we might attempt. Firstly, it may be worthwhile to try to decrease the probability that advanced AI develops human models “by default”, by promoting some lines of research over others. For example, an AI trained in a procedurally-generated virtual environment seems significantly less likely to develop human models than an AI trained on human-generated text and video data. Secondly, we can focus on safety research that does not require human models, so that if we eventually build AGI systems that are highly capable without using human models, we can make them safer without needing to teach them to model humans. Examples of such research, some of which we mentioned earlier, include developing human-independent methods to measure negative side effects, to prevent specification gaming, to build secure approaches to containment, and to extend the usefulness of task-focused systems.

    Acknowledgements: thanks to Daniel Kokotajlo, Rob Bensinger, Richard Ngo, Jan Leike, and Tim Genewein for helpful comments on drafts of this post.

    The post Thoughts on Human Models appeared first on Machine Intelligence Research Institute. Read more »
  • Our 2018 Fundraiser Review
    Our 2018 Fundraiser ended on December 31, with the five-week campaign raising $951,817[1] from 348 donors to help advance MIRI’s mission. We surpassed our Mainline Target ($500k) and made it more than halfway again to our Accelerated Growth Target ($1.2M). We’re grateful to all of you who supported us. Thank you!

    • Target 1 ($500,000, Mainline target): completed. This target represents the difference between what we’ve raised so far this year, and our point estimate for business-as-usual spending next year.
    • Target 2 ($1,200,000, Accelerated growth target): in progress when the fundraiser concluded. This target represents what’s needed for our funding streams to keep pace with our growth toward the upper end of our projections.

    With cryptocurrency prices significantly lower than during our 2017 fundraiser, we received less of our funding (~6%) from holders of cryptocurrency this time around. Despite this, our fundraiser was a success, in significant part thanks to the leverage gained by MIRI supporters’ participation in multiple matching campaigns during the fundraiser, including WeTrust Spring’s Ethereum-matching campaign, Facebook’s Giving Tuesday event, and professional poker player Dan Smith’s Double Up Drive, expertly administered by Raising for Effective Giving. Together with significant matching funds generated through donors’ employer matching programs, matching donations accounted for ~37% of the total funds raised during the fundraiser.

    1. WeTrust Spring

    MIRI participated, along with 17 other non-profit organizations, in WeTrust Spring’s innovative ETH-matching event, which ran through Giving Tuesday, November 27. The event was the first-ever implementation of Glen Weyl, Zoë Hitzig, and Vitalik Buterin’s Liberal Radicalism (LR) model for non-profit funding matching. Unlike most matching campaigns, which match exclusively based on total amount donated, this campaign matched in a way that heavily factored in the number of unique donors when divvying out the matching pool, a feature WeTrust highlighted as “Democratic Donation Matching”.

    During MIRI’s week-long campaign leading up to Giving Tuesday, some supporters went deep into trying to determine exactly what instantiation of the model WeTrust had created — how exactly DO the large donations provide leverage of a 450% match rate for minimum donations of 0.1 ETH? Our supporters’ excitement about this new matching model was also evident in the many donations that were made — as WeTrust reported in their blog post, “MIRI, the Machine Intelligence Research Institute was the winner, clocking in 64 qualified donations totaling 147.751 ETH, then Lupus Foundation in second with 22 qualified donations and 23.851 total ETH.” Thanks to our supporters’ donations, MIRI received over 91% of the matching funds allotted by WeTrust and, all told, we received ETH worth more than $31,000 from the campaign. Thank you!
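    For readers curious about the mechanism those supporters were reverse-engineering, here is a minimal sketch of the Liberal Radicalism (also known as quadratic funding) matching rule from the Buterin, Hitzig, and Weyl paper, with a fixed pool split in proportion to each project's LR surplus. This is an illustration of the general model under that capped-pool assumption, not WeTrust's actual implementation (which is exactly what supporters were trying to pin down); the project names and amounts are invented.

```python
# Minimal sketch of Liberal Radicalism / quadratic-funding style matching.
# Not WeTrust's actual implementation; the pool-split rule and numbers
# are illustrative assumptions.
from math import sqrt

def lr_surplus(contributions):
    """(Sum of square roots)^2 minus the plain sum: the amount the LR
    mechanism would add on top of what donors gave directly."""
    return sum(sqrt(c) for c in contributions) ** 2 - sum(contributions)

def split_matching_pool(pool, projects):
    """Divide a fixed matching pool across projects in proportion to
    each project's LR surplus (a common capped-pool variant)."""
    surpluses = {name: lr_surplus(cs) for name, cs in projects.items()}
    total = sum(surpluses.values())
    return {name: pool * s / total for name, s in surpluses.items()}

# Two hypothetical projects receiving the same total (10 ETH):
# one hundred small donors versus two large donors.
projects = {
    "many_small_donors": [0.1] * 100,   # 100 donors x 0.1 ETH
    "few_large_donors": [9.0, 1.0],     # 2 donors, 10 ETH total
}
print(split_matching_pool(pool=50.0, projects=projects))
# With equal totals, the project with many unique donors captures
# nearly the entire pool: LR surplus grows with donor count.
```

    This is the sense in which the campaign "heavily factored in the number of unique donors": under the LR formula, many small donations generate far more matching surplus than a few large ones of the same total.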
    2. Facebook Giving Tuesday Event

    Some of our hardiest supporters set their alarm clocks extra early to support us in Facebook’s Giving Tuesday matching event, which kicked off at 5:00am EST on Giving Tuesday. Donations made before the $7M matching pool was exhausted were matched 1:1 by Facebook/PayPal, up to a maximum of $250,000 per organization, a limit of $20,000 per donor, and $2,500 per donation. MIRI supporters, some with our tipsheet in hand, pointed their browsers — and credit cards — at MIRI’s fundraiser Facebook Page (and another page set up by the folks behind the EA Giving Tuesday Donation Matching Initiative — thank you Avi and William!), and clicked early and often.

    During the previous year’s event, it took only 86 seconds for the $2M matching pool to be exhausted. This year saw a significantly larger $7M pool exhausted dramatically faster, sometime in the 16th second. Fortunately, before it ended, 11 MIRI donors had already made 20 donations totalling $40,072.

    [Chart: Facebook donations to MIRI on Giving Tuesday (first minute); x-axis: seconds, y-axis: dollars (USD); series: match event window, donations, and donations + match.]
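    As a rough sketch of how the stated caps interact (an illustrative reading of the rules described above, not Facebook's actual matching code), the match on any single gift is the donation amount clipped by the per-donation cap and by whatever headroom remains under the per-donor, per-organization, and pool limits:

```python
# Illustrative reading of the 2018 Giving Tuesday match rules described above;
# not Facebook's actual implementation.
PER_DONATION_CAP = 2_500
PER_DONOR_CAP = 20_000
PER_ORG_CAP = 250_000

def match_for(donation, donor_matched_so_far, org_matched_so_far, pool_left):
    """1:1 match on a single donation, limited by every remaining cap."""
    return max(0, min(
        donation,
        PER_DONATION_CAP,
        PER_DONOR_CAP - donor_matched_so_far,
        PER_ORG_CAP - org_matched_so_far,
        pool_left,
    ))

# A donor's third maximum-size gift while the pool is still open:
print(match_for(2_500, donor_matched_so_far=5_000,
                org_matched_so_far=30_000, pool_left=4_000_000))   # -> 2500
# The same gift after the donor has already had $19,000 matched:
print(match_for(2_500, donor_matched_so_far=19_000,
                org_matched_so_far=30_000, pool_left=4_000_000))   # -> 1000
```

    In practice the binding constraint on Giving Tuesday was none of these caps but the shared $7M pool, which is why the race came down to seconds.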
    Overall, 66% of the $61,023 donated to MIRI on Facebook on Giving Tuesday was matched by Facebook/PayPal, resulting in a total of $101,095. Thank you to everyone who participated, especially the early risers who so effectively leveraged matching funds on MIRI’s behalf, including Quinn Maurmann, Richard Schwall, Alan Chang, William Ehlhardt, Daniel Kokotajlo, John Davis, Herleik Holtan, and others. You guys rock! You can read more about the general EA community’s fundraising performance on Giving Tuesday in Ari Norowitz’s retrospective on the EA Forum.

    3. Double Up Drive Challenge

    Poker player Dan Smith and a number of his fellow professional players came together for another end-of-year Matching Challenge — once again administered by Raising for Effective Giving (REG), who have facilitated similar matching opportunities in years past. Starting on Giving Tuesday, November 27, $940,000 in matching funds was made available for eight charities focused on near-term causes (Malaria Consortium, GiveDirectly, Helen Keller International, GiveWell, Animal Charity Evaluators, Good Food Institute, StrongMinds, and the Massachusetts Bail Fund); and, with the specific support of poker pro Aaron Merchak, $200,000 in matching funds was made available for two charities focused on improving the long-term future of our civilization, MIRI and REG.

    With the addition of an anonymous sponsor to Dan’s roster in early December, an extra $150,000 was added to the near-term causes pool. A week later, after his win at the DraftKings World Championship, Tom Crowley followed through on his pledge to donate half of his total event winnings to the drive, adding significantly increased funding, $1.127M, to the drive’s overall matching pool as well as two more organizations — Against Malaria Foundation and EA Funds’ Long-Term Future Fund.

    The last few days of the drive saw a whirlwind of donations being made to all organizations, causing the pool of $2.417M to be exhausted 24 hours before the declared end of the drive (December 29), at which point Martin Crowley came in to match all donations made in the last day, thus increasing the matched donations to over $2.7M. In total, MIRI donors had $229,000 matched during the event.

    We’re very grateful to all these donors, to Dan Smith for instigating this phenomenally successful event, and to his fellow sponsors, especially Aaron Merchak and Martin Crowley, for matching donations made to MIRI. Finally, a big shout-out to REG for facilitating and administering so effectively – thank you Stefan and Alfredo!

    4. Corporate Matching

    A number of MIRI supporters work at corporations that match contributions made by their employees to 501(c)(3) organizations like MIRI. During MIRI’s fundraiser, over $62,000 in matching funds from various employee matching programs was leveraged by our supporters, adding to the significant corporate matching funds already leveraged during 2018 by these and other MIRI supporters.
    We’re extremely grateful for all the support we received during this fundraiser, especially the effective leveraging of the numerous matching opportunities, and are excited about the opportunity it creates for us to continue to grow our research team. If you know of — or want to discuss — any giving, matching, or other support opportunities for MIRI in 2019, please get in touch with me[2] at colm@intelligence.org. Thank you!

    [1] The exact total is still subject to change as we continue to process a small number of donations.
    [2] Colm Ó Riain is MIRI’s Head of Growth. Colm coordinates MIRI’s philanthropic and recruitment strategy to support MIRI’s growth plans.

    The post Our 2018 Fundraiser Review appeared first on Machine Intelligence Research Institute. Read more »
