Yudkowsky
Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American decision theory and artificial intelligence (AI) researcher and writer, best known for popularizing the idea of friendly artificial intelligence. He is a co-founder of and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion was an influence on Nick Bostrom's ''Superintelligence: Paths, Dangers, Strategies''.

Work in artificial intelligence safety

Goal learning and incentives in software systems

Yudkowsky's views on the safety challenges posed by future generations of AI systems are discussed in the standard undergraduate AI textbook, Stuart Russell and Peter Norvig's ''Artificial Intelligence: A Modern Approach''. Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be design ...

Intelligence Explosion
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence. (Vernor Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era", in ''Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace'', G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993.) The first person to use the concept of a "singularity" in t ...

Machine Intelligence Research Institute
The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

History

In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI). However, Yudkowsky began to be concerned that AI systems developed in the future could become superintelligent and pose risks to humanity, and in 2005 the institute moved to Silicon Valley and began to focus on ways to identify and manage those risks, which were at the time largely ignored by scientists in the field. Starting in 2006, the Institute organized the Singularity ...

Harry Potter And The Methods Of Rationality
''Harry Potter and the Methods of Rationality'' (''HPMOR'') is a ''Harry Potter'' fan fiction by Eliezer Yudkowsky. It adapts the story of ''Harry Potter'' to explain complex concepts in cognitive science, philosophy, and the scientific method. Yudkowsky published ''HPMOR'' as a serial from February 28, 2010 to March 14, 2015, totaling 122 chapters and about 660,000 words. Yudkowsky wrote ''HPMOR'' to promote the rationality skills he advocates on his community blog ''LessWrong''. His reimagining supposes that Harry's aunt Petunia Evans married an Oxford professor and homeschooled Harry in science and rational thinking. As such, Harry "enters the wizarding world armed with Enlightenment ideals and the experimental spirit." The fan fiction spans one year, covering Harry's first year at Hogwarts. ''HPMOR'' has inspired other works of fan fiction, art, and poetry.

Plot

In this alternate universe, Lily Potter magically beautified Petunia Evans, letting her abandon Vernon Dursley an ...

LessWrong
''LessWrong'' (also written ''Less Wrong'') is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics.

Purpose

''LessWrong'' promotes lifestyle changes believed by its community to lead to increased rationality and self-improvement. Posts often focus on avoiding biases related to decision-making and the evaluation of evidence. One suggestion is the use of Bayes' theorem as a decision-making tool. There is also a focus on psychological barriers that prevent good decision-making, including fear conditioning and cognitive biases that have been studied by the psychologist Daniel Kahneman. ''LessWrong'' is also concerned with transhumanism, existential threats and the singularity. ''The New York Observer'' noted that "Despite describing itself as a forum on 'the art of human rationality,' the New York Less Wrong group... is fixated on a branch of futurism that woul ...
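The entry above mentions Bayes' theorem as a decision-making tool. A minimal sketch of that idea in Python: a single Bayesian update of a prior belief given new evidence. The medical-test numbers are hypothetical, chosen only to illustrate the base-rate effect that such posts often discuss.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) via Bayes' theorem, from P(H), P(E | H), and P(E | ~H)."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Illustrative numbers: a 1% base rate, a test with 90% sensitivity
# and a 9% false-positive rate. Despite the positive test, the
# posterior stays low because the base rate is so small.
posterior = bayes_update(prior=0.01,
                         p_evidence_given_h=0.90,
                         p_evidence_given_not_h=0.09)
print(round(posterior, 3))  # ≈ 0.092
```

The point of the example is the one Kahneman's work on base-rate neglect makes: intuition tends to overweight the evidence (the positive test) and underweight the prior (the 1% base rate).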

Friendly Artificial Intelligence
Friendly artificial intelligence (also friendly AI or FAI) refers to hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent ''should'' behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensure it is adequately constrained.

Etymology and usage

The term was coined by Eliezer Yudkowsky, who is best known for popularizing the idea, to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, ''Artificial Intelligence: A Modern Approach'', describes the idea: Yudkowsky (2008) goes into more detail ...


Instrumental Convergence
Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings (both human and non-human) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without end, provided that their ultimate (intrinsic) goals may never be fully satisfied. Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving an incredibly difficult mathematics problem like the Riemann hypothesis could attempt to turn the entire Earth into one giant computer in an effort to increase its computational power so that it can succeed in its calculations. Proposed basic AI drives include utility function or goal-content integrity, self-protection, ...

Superintelligence
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity. University of Oxford philosopher Nick Bostrom defines ''superintelligence'' as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether a ...

Artificial Intelligence
Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by animals and humans. Example tasks include speech recognition, computer vision, translation between (natural) languages, and other mappings of inputs to outputs. The ''Oxford English Dictionary'' of Oxford University Press defines artificial intelligence as "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages." AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making, and competing at the highest level in strategic game systems (such as chess and Go). ...


Nick Bostrom
Nick Bostrom (Swedish: Niklas Boström; born 10 March 1973) is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in ''Foreign Policy''s Top 100 Global Thinkers list. Bostrom is the author of over 200 publications, and has written two books and co-edited two others. The two books he has authored are ''Anthropic Bias: Observation Selection Effects in Science and Philosophy'' (2002) and ''Superintelligence: Paths, Dangers, Strategies'' (2014). ''Superintelligence'' was a ''New York Times'' bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence". Bostrom believes that sup ...

Robin Hanson
Robin Dale Hanson (born August 28, 1959) is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known for his work on idea futures and markets, and he was involved in the creation of the Foresight Institute's Foresight Exchange and DARPA's FutureMAP project. He invented market scoring rules like LMSR (Logarithmic Market Scoring Rule) used by prediction markets such as Consensus Point (where Hanson is Chief Scientist), and has conducted research on signalling.

Background

Hanson received a BS in physics from the University of California, Irvine in 1981, an MS in physics and an MA in Conceptual Foundations of Science from the University of Chicago in 1984, and a PhD in social science from Caltech in 1997 for his thesis titled ''Four puzzles in information and politics: Product bans, informed voters, social insurance, and persistent disagreement''. Before getting his PhD he rese ...


Future Of Humanity Institute
The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director is philosopher Nick Bostrom, and its research staff and associates include futurist Anders Sandberg, engineer K. Eric Drexler, economist Robin Hanson, and Giving What We Can founder Toby Ord. Sharing an office and working closely with the Centre for Effective Altruism, the institute's stated objective is to focus research where it can make the greatest positive difference for humanity in the long term. It engages in a mix of academic and outreach activities, seeking to promote informed discussion and public engagement in government, businesses, universities, and other organizations. The centre's largest research funders include Amlin, Elon Musk, the European Research Council, Future of Life Institu ...

Business Insider
''Insider'', previously named ''Business Insider'' (''BI''), is an American financial and business news website founded in 2007. Since 2015, a majority stake in ''Business Insider''s parent company Insider Inc. has been owned by the German publishing house Axel Springer. It operates several international editions, including one in the United Kingdom. ''Insider'' publishes original reporting and aggregates material from other outlets. It has maintained a liberal policy on the use of anonymous sources. It has also published native advertising and granted sponsors editorial control of its content. The outlet has been nominated for several awards, but has been criticized for using factually incorrect clickbait headlines to attract viewership. In 2015, Axel Springer SE acquired an 88 percent stake in Insider Inc. for $343 million (€306 million), implying a total valuation of $442 million. In February 2021, the brand was renamed simply ''Insider''.

History

''Busi ...