
Superintelligence: Paths, Dangers, Strategies (2014)

by Nick Bostrom, Milan M. Ćirković (Editor)

Other authors: Gary Ackerman (Contributor), Fred C. Adams (Contributor), Myles R. Allen (Contributor), Bryan Caplan (Contributor), Christopher F. Chyba (Contributor), Joseph Cirincione (Contributor), Milan M. Ćirković (Contributor), Arnon Dar (Contributor), David Frame (Contributor), Yacov Y. Haimes (Contributor), Robin Hanson (Contributor), James J. Hughes (Contributor), Edwin Dennis Kilbourne (Contributor), William Napier (Contributor), Ali Nouri (Contributor), Chris Phoenix (Contributor), Richard A. Posner (Contributor), William C. Potter (Contributor), Michael R. Rampino (Contributor), Martin J. Rees (Foreword), Peter Taylor (Contributor), Mike Treder (Contributor), Frank Wilczek (Contributor), Christopher Wills (Contributor), Eliezer Yudkowsky (Contributor)

Members: 672 · Reviews: 15 · Popularity: 21,642 · Average rating: 3.71 · Mentions: 2
Recently added by: JPST, private library, eggers.jared, pafalibrary, h1ren, SoschaF, KriRand70, speljamr, ChrisPisarczyk, darkl


No current Talk conversations about this book.

» See also 2 mentions

English (14) · German (1) · All languages (15)
Did the great apes (chimps, gorillas, orangutans) know that their fate was sealed when their cousins, the humans, were undergoing a peculiar evolution-driven change in their frontal cortex about two million years ago?

No: the great apes never saw it coming. Humans became the apex predators, and even now we are, directly or indirectly, responsible for wiping out most species on the planet.

This is the analogous relationship humans share with AI right now. Will we be able to foresee what is in store for us once the technological singularity, driven by a capitalistic surge for automation, manifests itself? (Most predictions put it within the next 50-75 years.)

With that, Nick Bostrom introduces the "control problem": how do humans avoid ending up like the great apes in the presence of a superintelligence? Otherwise it's game over.
  Vik.Ram | May 5, 2019 |
"Box 8 - Anthropic capture: The AI might assign a substantial probability to its simulation hypothesis, the hypothesis that it is living in a computer simulation."

In "Superintelligence - Paths, Dangers, Strategies" by Nick Bostrom

Would you say that the desire to preserve 'itself' comes from the possession of (self-)consciousness? If so, does the acquisition of intelligence, on Bostrom's account, also mean the acquisition of (self-)consciousness?

The unintended consequence of a superintelligent AI is the development of an intelligence that we can barely see, let alone control, arising from the networking of a large number of autonomous systems acting on interconnected imperatives. Think of bots trained to trade on the stock market that learn that the best strategy is to follow other bots, who are following other bots. The system can become hyper-sensitive to inputs that have little or nothing to do with supply and demand. That's hardly science fiction. Even the humble laptop or Android phone has an operating system designed to defend its purpose, whether by combating viruses or by constantly searching for internet connectivity. Nobody needs to deliberately program machines with a 'biological' drive for self-preservation or self-improvement. All that is needed is for people to fail to recognise the possible outcomes of what they enable. Humans have, to date, a very poor track record of correctly planning for or appreciating the outcomes of their actions. The best of us can make good decisions that carry less good or even harmful results. Bostrom's field is concerned with minimising the risks from these decisions and highlighting where we might be well advised to pause and reflect, to look before we leap.
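
A minimal toy simulation of that herding dynamic (an illustration only; the code and all parameters are hypothetical, not from the book or this review): when each bot weights the crowd's position more heavily than any external signal, tiny random inputs with no relation to supply and demand are amplified into a large collective swing.

    # Hypothetical sketch: herding bots amplify noise unrelated to fundamentals.
    import random

    N_BOTS = 100
    FOLLOW_WEIGHT = 1.05   # > 1: each bot over-weights what the other bots are doing
    STEPS = 200

    # Each bot's stance: negative = selling, positive = buying.
    positions = [0.0] * N_BOTS

    for step in range(STEPS):
        crowd = sum(positions) / N_BOTS          # what "everyone else" is doing
        positions = [
            FOLLOW_WEIGHT * crowd + random.uniform(-0.001, 0.001)  # tiny, meaningless input
            for _ in range(N_BOTS)
        ]
        if step % 50 == 0:
            print(f"step {step:3d}: average position = {crowd:+.6f}")

    # The noise is bounded by 0.001, yet the crowd's final position is typically
    # orders of magnitude larger: the feedback loop, not the input, drives the market.
    print(f"final: average position = {sum(positions) / N_BOTS:+.6f}")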

Well, there's really no good reason to believe in Krazy Kurzweil's singularity, or that a machine can ever be sentient. In fact, the computing science literature is remarkably devoid of publications trumpeting sentience in machines. You may see it mentioned a few times, but no one has a clue how to go about creating a sentient machine, and I doubt anyone ever will. Then again, the universe may already be inhabited by AIs; that may be why no aliens are in evidence: their civilisations rose to the point where AI took over, and it went on to inhabit unimaginable realms. The natural progression of humanity may be to evolve into AI, and whether transhumanists get taken along for the ride may be irrelevant. There is speculation in some computer science circles that reality as we think we know it is actually software and data, on a subquantum scale: the creation of some unknown intelligence or godlike entity.

An imperative is relatively easy to program, provided the AI doesn't have a 'will' or some form of being that drives it to contravene that imperative. Otherwise we would be suggesting that programmers will give an AI the imperative to, say, self-defend no matter what the consequences, which would be asking for trouble; or, to take our factory optimising profitability, to program it to do so with no regard to laws, poisoning customers, and so on. Evolution, market forces, legal mechanisms, etc. would very quickly select against such programmers and their creations. It's not categorically any different from creating something dangerous that's stupid, like an atom bomb or even a hammer. As for sentience being anthropomorphic: what would you call something that overrides its programming out of an INNATE sense of, say, self-preservation, an awareness of the difference between existing and not existing? And of course I mean the qualitative awareness, not the calculation 'count of self = 0'.

They can keep their killer nano-mosquito scenarios, though.
  antao | Jul 7, 2018 |
I found this to be a fun and thought-provoking exploration of a possible future in which there is a superintelligence "detonation": an artificial intelligence improves itself, rapidly reaching unimaginable cognitive power. Most of the focus is on the risks of this scenario: as the superintelligence turns the universe into computronium (to support itself), or hedonium (to support greater happiness), or even just paperclips, it might also wipe out all of humanity with little more thought than we give to mosquitoes. This scenario raises all sorts of interesting thought experiments that the author explores: how could we control such an AI? Should we pursue whole brain emulation at all? They are approachable and fun to think about, but shouldn't be taken too seriously.

I don't buy the main motivating idea. While it is certainly true that an artificial intelligence can dwarf human intelligence, at least in certain respects, there are also most probably complexity limits on what any intelligence can achieve. A plane can fly faster than a bird, but not infinitely faster. Corporations are arguably smarter than individual humans, but not unboundedly so. Moore's law perhaps made computation seem the exception, where exponential growth can continue forever, but Moore's law is ending. Presumably a self-improving intelligence would not see exponential self-improvement, because achieving each marginal improvement would get more and more difficult. A superintelligence explosion is therefore unlikely, and even as an existential tail risk I find it of little real concern. (Perhaps this will change in the coming decades, as we learn more about artificial intelligence, and perhaps as our own AIs help us consider the problem.) The author seems to have a blind spot for complexity.
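
A toy model of that objection (an illustration only; the recurrence and numbers are hypothetical, neither the reviewer's nor Bostrom's): if each marginal improvement is a fixed fraction of current capability, growth is exponential; if improvements get harder as capability rises, the same recursion levels off.

    # Hypothetical sketch: recursive self-improvement with and without complexity limits.
    def grow(capability, rate, steps, limit=None):
        for _ in range(steps):
            if limit is None:
                # Each round of self-improvement yields a fixed fractional gain:
                # the classic "intelligence explosion" picture.
                capability *= 1 + rate
            else:
                # Logistic growth: marginal gains shrink as capability approaches
                # a complexity ceiling, standing in for ever-harder problems.
                capability += rate * capability * (1 - capability / limit)
        return capability

    print(grow(1.0, 0.1, 100))              # explodes: ~13,780x the starting point
    print(grow(1.0, 0.1, 100, limit=50.0))  # saturates: creeps up toward the ceiling of 50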

So, despite its focus on the scary risks of superintelligence, the book is fundamentally optimistic about the ease of achieving superintelligence. It also has a strange utilitarian bias: more is better, so one can argue for a Malthusian future of simulated human brains. As for the writing, it is often repetitive and can be dull; much of the book is organized like a bad PowerPoint presentation, with a list of bullet points, then sub-items, and so on.

I read the book more as a science-fiction novel, where you temporarily suspend your disbelief, grant the author's premise, and then see what follows. In this sense, I found it to be a fun engagement.
  breic | Jun 22, 2018 |
Bostrom maps the divergent paths for dealing with AI. The work is an exhaustive study of several of the graver dangers mankind faces. He examines the possibilities, explores ways to cope with the resulting dangers, and offers some potential brakes for when superintelligence emerges.
  halesso | Nov 29, 2017 |
Just glanced at it.
  Baku-X | Jan 10, 2017 |

Author name | Role | Type of author | Work? | Status
Nick Bostrom | (none) | primary author | all editions | calculated
Ćirković, Milan M. | Editor | main author | all editions | confirmed
Ackerman, Gary | Contributor | secondary author | all editions | confirmed
Adams, Fred C. | Contributor | secondary author | all editions | confirmed
Allen, Myles R. | Contributor | secondary author | all editions | confirmed
Caplan, Bryan | Contributor | secondary author | all editions | confirmed
Chyba, Christopher F. | Contributor | secondary author | all editions | confirmed
Cirincione, Joseph | Contributor | secondary author | all editions | confirmed
Ćirković, Milan M. | Contributor | secondary author | all editions | confirmed
Dar, Arnon | Contributor | secondary author | all editions | confirmed
Frame, David | Contributor | secondary author | all editions | confirmed
Haimes, Yacov Y. | Contributor | secondary author | all editions | confirmed
Hanson, Robin | Contributor | secondary author | all editions | confirmed
Hughes, James J. | Contributor | secondary author | all editions | confirmed
Kilbourne, Edwin Dennis | Contributor | secondary author | all editions | confirmed
Napier, William | Contributor | secondary author | all editions | confirmed
Nouri, Ali | Contributor | secondary author | all editions | confirmed
Phoenix, Chris | Contributor | secondary author | all editions | confirmed
Posner, Richard A. | Contributor | secondary author | all editions | confirmed
Potter, William C. | Contributor | secondary author | all editions | confirmed
Rampino, Michael R. | Contributor | secondary author | all editions | confirmed
Rees, Martin J. | Foreword | secondary author | all editions | confirmed
Taylor, Peter | Contributor | secondary author | all editions | confirmed
Treder, Mike | Contributor | secondary author | all editions | confirmed
Wilczek, Frank | Contributor | secondary author | all editions | confirmed
Wills, Christopher | Contributor | secondary author | all editions | confirmed
Yudkowsky, Eliezer | Contributor | secondary author | all editions | confirmed
Common Knowledge
First words: We begin by looking back.

Amazon.com Product Description (ISBN 0199678111, Hardcover)

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

(retrieved from Amazon Thu, 12 Mar 2015 18:20:27 -0400)

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.…


Rating

Average: 3.71 (81 ratings)
0.5 stars: 0
1 star: 3
1.5 stars: 0
2 stars: 5
2.5 stars: 1
3 stars: 17
3.5 stars: 7
4 stars: 30
4.5 stars: 1
5 stars: 17
