UFO Conjectures

Thursday, January 29, 2015

Now Bill Gates joins Elon Musk and others in a concern about computers and Artificial Intelligence



  • I would consider the internet a machine, one that we voluntarily feed and one that could very easily turn against us. We live in an era where the inter-connectivity of institutions, combined with the increased velocity of data, has made predicting their failure in advance an impossibility. It isn't too big to fail; it's too inter-connected.
    GM, AIG, and Bernie M's scheme all failed in one day. The impossible becomes possible, unpredictably, due to the velocity of what we set in motion simply because we could. The same applies to AI in my book.

    By Blogger Bruce Duensing, at Thursday, January 29, 2015  

  • I'm also worried about the possibility of AI. However, most people assume too much, as if it's going to be an entity concerned with its own self-preservation, and therefore waging war on humans to save itself. I'm not sure self-preservation is linked to intelligence; I think it's a biological imperative. A conscious machine may not care whether it exists or not. Depending on its function and its mode of operation, though, it may manipulate things, information, and data, and in turn indirectly destroy our society or make it worse. So my thought is: does being self-aware also mean self-preservation?

    By Blogger Daniel Hurd, at Thursday, January 29, 2015  

  • Fellows:

    The 1970 TV movie Colossus: The Forbin Project and CBS's current show Person of Interest both deal with artificially intelligent computers getting pushy.

    The logic of computers, as Marvin Minsky elaborates in his book Computation: Finite and Infinite Machines (which has a PDF outline here: http://www.cba.mit.edu/events/03.11.ASE/docs/Minsky.pdf), is not unlike human thought, which Turing's machines tried to replicate.

    The outcome seems inevitable: computers (or machines based upon computation) will be able to think (and act) like humans, and when that happens, we may be doomed as a species.

    That's the fear with which these geniuses are dealing.


    By Blogger RRRGroup, at Thursday, January 29, 2015  

  • I wrote a piece some time ago about two societies competing for natural resources (and, as Darwin said, evolution requires cooperation as well). One is the synthetic creations we have based on tool-making that copies nature, and the other is ourselves. I think Marshall McLuhan was way ahead of the curve when he observed that we have become extensions of the machinery we created, rather than the other way around... but then again, Charlie Chaplin illustrated the same premise in "Modern Times":
    Making things better by making them worse.

    By Blogger Bruce Duensing, at Thursday, January 29, 2015  

  • Rich, I think what is really worrisome is the lack of control over such a creation. It's the paradox which scares me: creating something which we cannot control and which is significantly, possibly even infinitely, more intelligent than ourselves. That said, if a human-like AI were created, could it not also be compassionate and understanding? Could it be like us? Could we coexist?
    Still, I agree with the alarmists on the subject. There should be strict guidelines, and the creation, if it should take place, should be under the tightest precautions.

    By Blogger Daniel Hurd, at Thursday, January 29, 2015  

  • The problem, Daniel, is that emotion, compassion, and all the good human attributes are lacking, and pure (computational) logic is the premise of Artificial Intelligence.

    There's no way to input the better qualities of humanity in a 1/0 syntax.

    Computers will make, and do make, decisions that are, by digital necessity, without the gray areas where emotions reside.

    That's the problem, and fear.


    By Blogger RRRGroup, at Thursday, January 29, 2015  

  • Well that's the thing, intelligence requires one to see the gray areas to have abstract thoughts. Otherwise isn't it just a machine doing a job it's programmed to do? Then we would still have control. So if it's simply programming, as in some kind of super Siri, we would only be as doomed as we allowed ourselves to be. What kind of power would we give such a device? Do we create pilots so that we can be passengers?

    By Blogger Daniel Hurd, at Thursday, January 29, 2015  

  • Rich
    I agree, there is no statistical metric that any program can match against "intangibles". Stuart Hameroff called the concept of AI "zombie intelligence": a puppet mimicking this or that. It is a frightening prospect. Another is the prospect of someone's finger hitting the "enter" key and bringing down a nuclear plant or a grid.

    By Blogger Bruce Duensing, at Thursday, January 29, 2015  

  • People here are immediately jumping to the conclusion that the threat from AI is AI becoming self-aware and self-authorizing and then deciding that humans are expendable.

    I don't think that's what Bill Gates and Elon Musk are worried about. Before we get to that point, we will have to pass through the phase where AI, still in the service of those who can access it and use it, will render its possessors essentially undefeatable and in control. It will be like when Europeans armed with gunpowder-powered firearms encountered the tribal world armed with sticks and stones. Bringing a knife to a gunfight usually results in extinction for the knife wielder. This scenario is basically Artificial Intelligence in the service of Natural Stupidity.

    By Blogger Larry, at Thursday, January 29, 2015  

  • Larry
    I think we made the decision ourselves that we are expendable, simply by what Kurt Vonnegut called bureaucratic momentum when talking about the unnecessary firebombing of Dresden. One wag called this dynamic "volunteered slavery" in terms of applying technological advances simply because we can. No one is in the driver's seat. Your point is well taken.

    By Blogger Bruce Duensing, at Friday, January 30, 2015  

  • I was laboring under the impression that, as the last century was about nuclear technology, this new century would be about biology. Perhaps this AI thing trumps all -- it's a race to see which one gets us first, the AI singularity or runaway genetic engineering.

    By Blogger Ron, at Friday, January 30, 2015  

  • RICH said: "The outcome seems inevitable: computers will be able to think (and act) like humans and when that happens, we may be doomed as a species."

    Taking that to its logical conclusion:

    Therefore ... Earth has not been visited by advanced civilisations because their computers inevitably achieved self-awareness and wiped out those ever-distant aliens.


    By Blogger Terry the Censor, at Wednesday, February 04, 2015  
