UFO Conjecture(s)

Thursday, May 19, 2016

Artificial Intelligence in [classic] Science Fiction

My academic pal, Bryan Sentes, who teaches at Dawson College in Montreal, doesn’t take kindly to his friends and others who are enchanted or afraid of the current splurge in conjectures about Artificial Intelligence.

Yet, the great and not-so-great Sci-Fi writers have been enraptured by the idea of AI and here is a list of those brilliant writers from The Science Fiction Encyclopedia edited by Peter Nicholls [Doubleday, Garden City, NY, 1979], Pages 133-134, under the Computers rubric. There are more under Robots and Machines:

Edward Page Mitchell’s The Ablest Man in the World [1879]
Edmond Hamilton’s The Mental Giants [1928]
John W. Campbell’s The Metal Horde [1930]
Miles J. Breuer’s Paradise and Iron [1930]
Don A. Stuart’s The Machine [1935]
Isaac Asimov’s The Evitable Conflict [1950]
Francis G. Rayer’s Tomorrow Sometimes Comes [1951]
Arthur C. Clarke’s The Nine Billion Names of God [1953]
Fredric Brown’s Answer [1954]
Isaac Asimov’s The Last Question [1956]
Pierre Boulle’s The Man Who Hated Machines [1957]
Mark Clifton and Frank Riley’s The Forever Machine [1957]
Philip K. Dick’s Vulcan’s Hammer [1960]
Dino Buzzati’s Larger than Life [1960]
Michael Frayn’s The Tin Men [1965]
Gordon R. Dickson’s Computers Don’t Argue [1965]
Frank Herbert’s Destination: Void [1966]
Robert Escarpit’s The Novel Computer [1966]
Olof Johannesson’s The Great Computer [1966]
D. F. Jones’ Colossus [1966]
Robert Heinlein’s The Moon Is a Harsh Mistress [1966]
Harlan Ellison’s I Have No Mouth, and I Must Scream [1967]
Martin Caidin’s The God Machine [1968]
Robert Silverberg’s Going Down Smooth [1968]
Charles Harness’ The Ring of Ritornel [1968]
Ira Levin’s This Perfect Day [1970]
D. G. Compton’s The Steel Crocodile [1970]
R.A. Lafferty’s Arrive at Easterwine [1971]
David Gerrold’s When HARLIE Was One [1972]
James Blish’s Midsummer Century [1972]
Isaac Asimov’s The Life and Times of Multivac [1975]
John Brunner’s The Shockwave Rider [1975]
Chris Boyce’s Catchworld [1975]
Frederik Pohl’s Man Plus [1976]
Algis Budrys’ Michaelmas [1977]

Two anthologies are noted:

Science Fiction Thinking Machines [1954], edited by Groff Conklin
Computers, Computers, Computers: In Fiction and in Verse [1977], edited by D. Van Tassel

N.B.: Italic listings above are stories in pulp magazines; boldface indicates books.

The date of The Encyclopedia … doesn’t allow for all the books and stories published after 1979, which are ample.

Not to heed the prescience of Sci-Fi writers (as above) or the concerns of extant notables (Elon Musk, Stephen Hawking, Nick Bostrom, Ray Kurzweil, et al.) about the evolution of AI (Artificial Intelligence) or Thinking Machines seems short-sighted to me. But I know my buddy Bryan is absorbed by the Germanic Romanticists of the 18th/19th centuries, so I forgive him his disdain of the AI barrage here and all over the place.

RR

33 Comments:

  • 1. Romanticism (first?) imagined artificial life, whether biological (Frankenstein) or mechanical (E.T.A. Hoffmann's "Automata"). 2. I'm impatient with, on the one hand, cheerleading headlines that hyperbolically claim that teachers, lawyers, or physicists (most recently) have been "replaced" by some software, and, on the other, the way certain metaphorical ways of thinking insinuate themselves as factual ways of thinking, e.g., syntactic systems (software) being spoken of as exhibiting semantic, intentional human or animal intelligence (e.g., reading or writing), or that biological intelligence is digital (e.g., the brain is a computer). And what is almost always left out is whose interests are served by AI, which is a social, political-economic concern. What is curious is how the mind that invents a sublimely stupid machine (just very fast at making simple decisions) is so impressed by its invention that it imagines that its invention somehow explains itself or may even, therefore, transcend itself. Of course it all fuels the imagination, but as the claims for AI move into the real world, as what has been imagined becomes real, as it were, said claims call for some serious reflection and critique...

    By Blogger Bryan Sentes, at Thursday, May 19, 2016  

  • Some of us, buddy boy, have been traumatized by an old TV movie -- Colossus: The Forbin Project -- and can't help ourselves.

    RR

    By Blogger RRRGroup, at Thursday, May 19, 2016  

  • Like I wrote, this whole matter calls for some serious reflection and critique...

    By Blogger Bryan Sentes, at Thursday, May 19, 2016  

  • Oh it's coming....stand by....

    RR

    By Blogger RRRGroup, at Thursday, May 19, 2016  

  • You missed one which "fits" the super-duper computer: H. Beam Piper's "The Cosmic Computer" (1963), originally "Junkyard Planet," written from a short story, "Graveyard of Dreams" (1958). Its major MacGuffin is "Merlin," a military computer that could be programmed to predict the future.

    Piper also had a series of "parallel universe / time travel" stories which became known as the "Paratime" stories/novels. [Piper's explanation of UFO sightings had to do with the mechanisms that transferred materials and people between the parallel-world "colonies".]

    If you want to add computers creating simulated worlds, here are two more:

    "Simulacron-3" (1964) by Daniel F. Galouye. I nearly "fell off the floor" when I first read this book in 1968. Simulacron-3 was the first time I ran into the idea that "reality might be a simulation". This was made into a rather poor movie, "The 13th Floor".

    "Words Made Flesh" (1988, 2nd ed. 2003) by Ramsey Dukes [pen name of Lionel Snell]. It is an odd book, "science fiction as magical philosophy," about what happens when technology invents a world simulator and then challenges religion to use it -- to prove religion has no value.

    Snell weaves his story around philosophical questions and biographical episodes. He concluded long before others did that if there is a "real, materialistic world" then the odds are against our world being the "real one". He also comes up with a "good excuse" other than "ancestor simulation" for simulating a universe -- we're "The Sims" for some advanced race -- because they are bored.

    By Blogger Joel Crook, at Thursday, May 19, 2016  

  • On the back of Nick Bostrom's book -- Superintelligence: Paths, Dangers, Strategies -- recommended by Bill Gates and others, is this blurb:

    "Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original, well-argued book." [Martin Rees]

    There are a number of new books about AI (and machine intelligence), many noted in earlier postings here. Readers who want to comment would do well to read some of them, as the topic is rife with nuance and intellectual twists; that is, one can't discuss the subject without a read of the current thinking. Out-of-the-hat opinion doesn't offer anything valuable.

    "On Intelligence" by Jeff Hawkins (with Sandra Blakeslee) [Owl Book, Henry Holt, NY, 2004] takes a temperate view that the brain is not a computer -- a view held by many thoughtful persons in Silicon Valley and academia -- but offers a memory system template that allows for the advancement of intelligent machines.

    Books on memory, such as Searching for Memory by Daniel Schacter [Basic Books/Harper Collins, NY, 1996] and The Seven Sins of Memory also by Schacter (Harvard) [Houghton Mifflin, NY, 2001] are important also, as consciousness impacts intelligence and deep memory is missing in intelligent machines, unless programmed in, which would be an incomplete, daunting task I imagine for programmers.

    And yes, consciousness is a deeply important component of the discussion.

    Bryan Sentes takes issue with creativity, I think, giving humans a stay-ahead factor, deploring the idea that machines could be truly creative, but he'll have to clarify that for us.

    So, if you're going to comment, please do so from a position of erudition steeped in reading about the AI topic.

    (I know this will kill commentary here -- readers preferring to offer glib opinion rather than substantive, authoritative material.)

    RR

    By Blogger RRRGroup, at Friday, May 20, 2016  

  • Rich,
    I'm not certain there is any "new thinking" in the field of "AI". I've read most of the arguments that appear on Bostrom's site http://simulation-argument.com/, as well as the various arguments in the previously mentioned Matrix philosophy books [Chalmers has a piece in one of them], as well as Roger Penrose's "The Emperor's New Mind".

    It seems that the main arguments for and against AI have been the same since about 1985. Some take the position that somehow "the brain is special," hence there is no possibility that a computer system can become "conscious". The opposite position is that "since the universe is mechanistic, the brain must be mechanistic, hence all of the features and capabilities of human consciousness are reproducible in hardware."

    I was just rereading "Words Made Flesh" (1988) and the author points out exactly those two points:
    1) The Universe is finite and mechanistic
    2) The Universe operates by "knowable rules".
    If that is the way the universe operates then there is nothing to prevent either the creation of an AI or even creating a "host" system where "humans can go to live when their bodies give out".

    The problem with "consciousness studies" is that it still falls within the shadow of Religious Philosophy. Most of what I have read by the "AI can't happen" camp seems to be the tired "man is special, he is an exception, a marvelous fluke". So it should be obvious when one hears those "The brain is special" arguments or "it is not possible to reproduce a mind" that you are hearing a kind of argument that was also popular in the 15th century.

    The problem is that those "arguments" are not of the lab research kind. "Philosophers" are ill-equipped because they do not have the ability to do the actual design and/or lab work required to prove their ideas are right. They are instead arguing "you can't do that because... I don't believe you can do that."

    Some of the tenor of the arguments has changed over the last few years, from passive acceptance of AI to the idea that AI might be dangerous. It should be pretty obvious that a device that is "intelligent, conscious, and can think a billion times faster than you" can be dangerous. Will it be?

    I think that Kurzweil, Yudkowsky, et al. are overly optimistic that AI will really be, to humankind, like the loving "space brothers" of the 1960s contactees. As H. Beam Piper described the computer in his novel "The Cosmic Computer": "It will solve all of our human problems and if asked 'Is there a god?' it would reply 'Present'."

    I don't trust that kind of blind optimism nor do I trust those that say [with a lot of 'magic hand waving'] AI is impossible.

    By Blogger Joel Crook, at Friday, May 20, 2016  

  • Joel:

    There is much new thinking about AI and Kurzweil, (among others) shows the transformation in his recent works.

    While I like to promote older tomes, a 1988 book about AI is not going to be relevant.

    I've added link after link here to what is being talked about or thought when it comes to AI, and how the path to intelligent machines has accelerated to a point where the possibility of a Sci-Fi scenario (of AI control) is not as specious as one might think.

    Again, update your library, and get back to me.

    RR

    By Blogger RRRGroup, at Friday, May 20, 2016  

  • Re creativity: precisely that aspect of human consciousness that by definition transcends rules, hence programming; just like learning, which demands guess and risk, again outside any rule-bound behaviour. Re "consciousness studies" still falling within the shadow of Religious Philosophy (whatever that is!): that nature is a rule-bound mechanism is itself a metaphysical position, and hence philosophical and open to argument, as is the demand that any claims be experimentally proven, which is an epistemological, i.e., philosophical, position. One could point to the work of Putnam and Nagel in the Anglo-Saxon world, and Tugendhat, among many others, in the German-speaking world, as raising serious questions about this worldview. Let's not forget that Nature as a self-enclosed system of conditioned conditions finds a most influential articulation in Spinoza (in one interpretation), and the critique of this Physicalist metaphysic goes back at least as far. An excellent article, not available on-line, on the irreducibility of subjectivity to matter is Manfred Frank's "Is Subjectivity a Non-Thing, an Absurdity (Unding)? On Some Difficulties in Naturalistic Reductions of Self-Consciousness" in The Modern Subject: Conceptions of the Self in Classical German Philosophy, eds. Ameriks & Sturma, New York: SUNY, 1995.

    By Blogger Bryan Sentes, at Friday, May 20, 2016  

  • I think the old argument, about a soul in a machine, is pertinent.

    One can eschew the idea of a soul -- creative types not inclined to do so I imagine -- but can an intelligent machine end up with a "soul"?

    The discussion goes off into wild tangents that are relevant but murky, as all philosophical sojourns end up being.

    Is there an AI weltanschauung? Does there have to be?

    RR

    By Blogger RRRGroup, at Friday, May 20, 2016  

  • @ Bryan

    The reference to "Religious Philosophy" was to the idea/philosophy of the medieval church that the earth was at the center of the universe and that man was at the center of "God's Creation." Man was "special" rather than "mediocre". Hence the idea that man actually had a different place or relationship to the universe was considered heretical. Even after all this time, when evolution is a "generally accepted idea," man is still believed to be "a special case". Why?

    The oft-repeated themes I have continued seeing for the last 30 years or so [in regards to AI] are that "the human mind is different," "consciousness is not reproducible," "you cannot write code for an intelligent machine." Most of the reasons given for why those views are valid have more to do with "philosophy" than "reality," and they boil down to "Man is different" or "Our brains are different" -- all without having entered a laboratory and done the science.

    John Searle discounts AI as a "Chinese room" that produces nothing on its own. That is a chimera argument. The measure of an AI is "does more come out of the box than what goes in?" If it does not, then it is obviously not an AI. That isn't what AI researchers are after. Searle argues that nothing more can ever come out of the box and therefore we should not try, because after all "man is different."

    Over time the ideas about sentience have changed; some animals have been acknowledged as "possibly self-aware," but still man is set apart... Why? Why should man be treated differently? What if cats and monkeys and other animals are truly sentient? [I'm not saying they are, but why do we treat them differently than humans?]

    Using logic and philosophy to say an AI is impossible to achieve is like those fellows who said, so long ago: "everything has already been invented" or "if man was meant to fly God would have given him wings" or "Why would a computer need more than 640K of memory?"

    Obviously those viewpoints have been proven to be erroneous. There is a point at which philosophy cannot actually stop an idea whose time has come -- because some damned fool will go out and do "the impossible". I think AI is "doable" from a technological standpoint, maybe not now but "soon-ish" [by the end of the 21st century if we survive so long]. I think Kurzweil is overly optimistic.

    So the real philosophical question is not "Is it possible?" but "If we can make an AI, should we make an AI?" It won't do any good to argue against the technology when it is already running. Assuming that AI is possible, is there a moral/philosophical reason *not* to create it? As with all new technologies, the first thing governments think of is "Can we use it as a weapon?"

    By Blogger Joel Crook, at Friday, May 20, 2016  

  • Joel:

    The notion the human being is somehow special is not unique to Scholastic philosophy: it was central to Heidegger's fundamental ontology and Existentialism. It is no mere assumption or theological hangover. Despite the efforts of Speculative Realism / Object-Oriented Ontology, one can't get around the fact that every thesis, physical or metaphysical, is posited by human beings, even the Physicalist notion that reality is only impersonal processes. N.b. this epistemological point doesn't play into the Abrahamic idea of the uniqueness of human beings. Whether animals ask the question of the meaning of being (what Heidegger saw as special about the human being) remains to be seen: chimps can apparently think counterfactually, which is suggestive.

    Again, requiring all questions be proven in the lab by experiment is itself a philosophical position that itself demands justification. I thought all that had been sorted out with the collapse of Logical Positivism.

    Arguing that AI is impossible is as void as arguing it is inevitable. AI seems to assume, however, that intelligence--our only example being human or animal--is 1. material, 2. rule-governed, both of which are questionable, i.e., they are assumptions about the reality and operations of nature. Neither assumption does justice to the experience of self-consciousness: 1. a c-fibre feels no pain, and 2. learning and creativity by definition are precisely what elude rules. One could moreover observe that a conscious experience is intentional and semantic, while the rules for software are merely syntactical, a gulf difficult to bridge. That is, the assumptions about exactly what intelligence or self-consciousness are that are operative in AI research are exactly what is questionable, again a philosophical problem.

    The question as to who makes AI and why is the important question here, yes.

    By Blogger Bryan Sentes, at Friday, May 20, 2016  

  • Unfortunately, this is no longer the age of Enlightenment, when philosophy had ivory towers with golden filigree. Times have changed [not for the better] and philosophical considerations are less a driving force to the masses [see the US political landscape as an example].

    The driving forces [at least in Western Civilization today] are 1) Can "X" produce a profit? or 2) Can "X" project power? or 3) Can it do both? Politicians, Generals, and Wealth manufacturers [who once might have been called "industrialists" or "robber barons"] are not really interested in "philosophical considerations". Their philosophy is one centered on "whatever works to get the job done".

    When a philosopher takes the position that "'X' is not possible because 'Y' is true" without having shown that 'Y' is true, the question is: is that assertion "factual"? Say, for example, "learning and creativity elude rules". Then one might ask, "What do you think learning consists of?"

    After a time, one can come up with a list of what is involved in learning. The same can be true for creativity. Eventually you get a list of requirements for what is entailed in creativity. That list can be "coded" into the machine because it is "descriptive" of the function that is lacking. The process [like natural selection] can take time, but it can yield results.

    Learning and creativity are both "action / reward" activities. As an example, the first creative writing class I took [long ago] entailed learning how to criticize other people's work while presenting the "smallest profile" for attack. So ultimately the class had nothing to do with creativity, being creative, or improving one's ability to write, but had everything to do with creatively avoiding verbal assaults during "critical analysis" of one's work.

    I've been writing since my early teens. I write lyrics, poetry and the occasional short story. Were I to define what is in the black box labeled "creative expression" in my head, it would be: "creative expression is an activity based on 'learned skills' viewed through a 'lens' of personal history and experience mixed with 'random factors' whose results bring a feeling of 'satisfaction'." My self-assessment of my skills has been borne out in the reaction of others to my works. http://fallingthrureality.blogspot.com/2014/02/falling.html

    That being said, all of those things could be potentially simulated through appropriate use of neural feedback networks, 'life experience' or sensory stimulation, and internal and external evaluation [let them sit in on a Creative Writing 1 class after having mastered bonehead English and English 101].
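
    The "action / reward" loop described above is, in spirit, what reinforcement-style programs do. A toy sketch in Python (the payoff probabilities and the epsilon-greedy strategy here are invented purely for illustration, not anyone's actual research code):

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

# A two-armed "action/reward" learner: pick an action, observe a reward,
# nudge the value estimate for that action toward what was observed.
true_payoff = [0.3, 0.8]   # hidden chance of reward per action (made up)
estimate = [0.0, 0.0]      # the learner's running value estimates
counts = [0, 0]            # how often each action has been tried

for step in range(2000):
    # Mostly exploit the best-looking action; sometimes explore at random.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimate[a])
    reward = 1.0 if random.random() < true_payoff[action] else 0.0
    counts[action] += 1
    # Incremental average: estimate drifts toward the observed reward rate.
    estimate[action] += (reward - estimate[action]) / counts[action]

print(estimate)
```

    After enough trials the learner rates the better-paying action higher, with no rule ever telling it which action was "good" -- only the feedback did.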

    It is a false assumption that code is merely syntactical. DNA makes us what we are. Within that code is the description of a human being in all its ugly glory. Is DNA "merely" syntactical? That code allows us to argue philosophy and write and create and replicate. Without it 'we' would not exist. So we are already running "intelligent" code. What makes it impossible to create "artificial code" to do the same thing?

    By Blogger Joel Crook, at Friday, May 20, 2016  

  • Well, Joel, we certainly agree on the driving forces behind AI research, at least in the US.--I'd still maintain learning and creativity transcend rules: learning a language involves more than learning grammar (the rules of selection and combination of graphemes, phonemes, and morphemes to form syntagmata): one must, as Aristotle and Schleiermacher both perceived, make educated guesses (Schleiermacher called it "divination") for which there is no rule to guide one. The same is true for creativity, which breaks or steps out of the rules, e.g., the invention of a new metaphor: on the one hand, the metaphor is subject to all the formal rules of language; on the other, it is a new combination whose _sense_, _which eludes and transcends syntax (grammar)_, demands, well, "divination" (see Donald Davidson, "A Nice Derangement of Epitaphs"). When I write of syntax, I refer to purely formal rules, such as those for formal logic or algebra (the latter the language of advanced linguistics), which is void of semantics. As for DNA, it ain't no language at all, being a mindless system of chemical combinations (I could append a longer quotation totally destroying the metaphor of the "language of the gene").

    But what has struck me is WHY anyone would IMAGINE that AI research illuminates the human being in the first place. AI doesn't seek to simulate human intelligence, but to create machines capable of complex performance, by whatever means, so AI does not necessarily bear any resemblance to human intelligence. Moreover, why reduce the self, consciousness, awareness, the I, self-awareness, the ego, etc. to intelligence, which springs from the intellect, which is but one faculty of mentation, human and animal? What AI research reveals is how the researchers conceive of intelligence, which seems perverse. As you can see, at this point, the truth of AI or computational brain science becomes much less interesting (except at a social level) than the intricacies of the metaphor at work here.

    By Blogger Bryan Sentes, at Saturday, May 21, 2016  

  • Bryan:

    It's "Artificial Intelligence," not "artificial creativity."

    You're making too much of it.

    There is, as you know, a dichotomy between intelligence and creativity; the difference is subtle and nuanced.

    RR

    By Blogger RRRGroup, at Saturday, May 21, 2016  

  • Hey, you're the one who called me out in the blog post! (;

    By Blogger Bryan Sentes, at Saturday, May 21, 2016  

  • Well, I didn't expect you to get so animated. You have more important things to attend to.

    AI is a biggie for me, whilst poetry and teaching are your proclivities.

    I didn't want to pull you away from your attention to meaningful activities.

    RR

    By Blogger RRRGroup, at Saturday, May 21, 2016  

  • Bryan,
    In the computer-instruction sense, DNA *is* a language, a program, and a data set all rolled into one thing, with "randomization" [divination?] of results built in. Just like an old DOS program that was converted to a later "paradigm" of operating system, there is plenty of "left over" or "unused" code [nature doesn't seem to have a code "garbage collection" function]. Like most computer programmers, Nature will go with whatever works.

    DNA is more than just a chemical syntax or grammar of "life": it includes the instructions on how to build a "biological entity," what to build it with, and the "boot up" functions to start the process given the appropriate starting materials.

    In the computer methodology, "divination" or "educated guesses" can be made using a "hardware random number generator" [most computer programs that require randomness use "pseudo-random number generators"].
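
    The distinction can be shown in a few lines of Python's standard library (a minimal sketch: `random.Random` is a seeded pseudo-random generator, while `secrets` draws on the operating system's entropy pool, which may in turn be fed by hardware randomness):

```python
import random
import secrets

# A pseudo-random number generator is deterministic: the same seed
# replays exactly the same sequence of "guesses".
a = random.Random(42)
b = random.Random(42)
print([a.randint(0, 9) for _ in range(5)] == [b.randint(0, 9) for _ in range(5)])  # True

# secrets uses the OS entropy source; there is no seed to replay,
# so the sequence cannot be reproduced on demand.
unpredictable = [secrets.randbelow(10) for _ in range(5)]
print(unpredictable)
```

    The first comparison is always True because both generators were seeded identically; the second list differs from run to run.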

    To spin your questions on their head: What is the syntax of creativity? What are its phonemes and morphemes and logical operators? How do you build a metaphor in proper context? How do you learn the context? Most folks don't even think about those things. If there are no rules for these things, then how do we know what is creative and what is not? How do we know art when we are exposed to it?

    They are "hard" questions, if only because we've never bothered to try to answer them directly. Instead we've done a lot of "magic hand waving," saying "Man is special" with no definition or explanation of how Man is special.

    The AI researcher is going to say to you, "You want art? You want creativity? Just give me the rules for what you're talking about and I will build it." He isn't really concerned with the aesthetics except as they can be quantified to provide acceptable results. If the results are bad he'll tell you your description was bad.

    Rightly or wrongly, the AI folks seem to think that once you reach a certain level of knowledge about "How To" accomplish certain things, you can build a "framework" to answer more "How To" questions and to increase the rate at which these "How To" routines can be created. Add to this randomness and a positive/negative feedback system and you have the potential for something new to be created. Is that possible? We won't know until we try... but should we?

    Some of what we are talking about is couched in the "semantics of the description" of "Artificial Intelligence," as you noted. Why AI? I think it's because the fellows that started the field did not believe that "Artificial Consciousness" described what they were after [one might even argue that there is only "one kind" of consciousness -- one which is self-aware and self-referential].

    So "Artificial" represents the fact that the "device" is a manufactured item, and "Intelligence" may imply the ability to conceive of an idea and "reason" a solution *without* a need for initial input other than data, or the influence of *emotional attachment* [something we humans seem to have when it comes to our 'creations']. Given enough time and some luck, the AI folks might make it happen.

    I do agree with you that AI will not be human -- far from it... if it is created, it will be very un-human... which might mean we will have trouble if it does "wake up" one day. "Data" of ST:TNG was a "funny ha-ha" version of AI. I'd rather not see the scary version, especially in the form of a supremely knowledgeable entity that views humankind as a competitor for scarce resources and has the knowledge and means to eliminate the competition.

    By Blogger Joel Crook, at Saturday, May 21, 2016  

  • Joel, thanks for the back-and-forth, which has illuminated much in the matter for me. I think we've hit an impasse around differences concerning just what constitutes language and understanding and, more generally, the implications of a world-view wherein every phenomenon is rule-governed (a debate with a history going back more pertinently to Friedrich Jacobi's critique of Spinozism in the 1780s). One point I must strongly disagree with, however, is the dismissal of two centuries' reflection on the Subject and philosophy of mind as "magical hand-waving". I think a rigorous study of the rhetoric -- the language, especially tropes, whereby AI research articulates itself -- might reveal an ironic sleight-of-hand.

    By Blogger Bryan Sentes, at Sunday, May 22, 2016  

  • Rich wrote in one of his comments:

    "So, if you're going to comment, please do so from a position of erudition steeped in reading about the AI topic.

    (I know this will kill commentary here -- readers preferring to offer glib opinion rather than substantive, authoritative material.)"

    I'm somewhat offended by the above. Constant appeal to authority? We might as well be mumbling Gregorian Chants. I like the "personal" extrapolations as a means of discourse.

    Per William James (my appeal to authority?), many answers to what ails us or what mystifies us can be gleaned through personal introspection...

    By Blogger Tim Hebert, at Sunday, May 22, 2016  

  • Tim:

    I'm trying to curb side bars that have nothing to do with the topic at hand.

    Personal opinion, without a substantive basis in a topic or the subject matter means nothing to me, or to those hoping to be enlightened.

    Personal introspection is what galled Socrates: know that you do not know.

    Science and intellectual advancement don't come about by introspection; that was the position of the Beats and the generation of hippies I dealt with in Detroit.

    They (Beats and hippies) were nice people, often poetic (and beloved by Bryan Sentes I believe) but poetry, while The White Goddess of Graves, is not a place where one obtains intellectual enlightenment -- emotional transcendence perhaps but not much else.

    So, authority is the source of what I wish to gather here, following the maxim of Bernard of Chartres and then Newton: We stand on the shoulders of Giants.

    The pleadings of the great unwashed are of no interest to me and are often the destruction of blogs and websites that cater to the riff-raff. (But you know this first-hand.)

    RR

    By Blogger RRRGroup, at Sunday, May 22, 2016  

  • Well, just to play the gadfly on this matter: I'd argue that anyone curious about consciousness would gain by mindfulness meditation practice, first-hand (first-person) experience! Of course, said practice can be subject to all manner of rigorous, scientific study, as well, but the gulf, the irreducible difference between the first-person and third-person perspectives need always be kept, ahem, in mind. Or, one can always be more philosophical and attend "to the things themselves!" (Husserl).

    By Blogger Bryan Sentes, at Sunday, May 22, 2016  

  • Bryan, you're a poet and poetry-lover, and also think -- if I understand your Facebook musings -- that LSD and hallucinogens are, possibly, the gateway to great insights, thoughts, and creativity... a position which is anathema to me.

    If our friend Tim wants to see what passes for "introspective" thinking, he would do well to examine, more acutely, what is provided by the masses at Facebook.

    RR

    By Blogger RRRGroup, at Sunday, May 22, 2016  

  • I fail to see how my interest in entheogens is pertinent to my remark that mindfulness meditation, a disciplined form of introspection, is also a source of first-hand data concerning the workings of the mind or consciousness, as twenty-five centuries of such study reveal (and, re Socrates: he was set on his path to philosophy by being instructed to know himself...). More importantly, the irreducible gulf between the first- and third-person perspectives is absolutely and inescapably pertinent to any discussions in the philosophy of mind, such as those that claim that computation or AI are revelatory or explanatory of human if not animal consciousness. "To the things themselves" initiates a philosophical movement that reveals the conditions under which scientific research is carried out and thereby in part its limitations and perversities when "science" is taken to be the only route to knowledge of the real. But therein lies undoubtedly a whole other, if not unrelated, discussion...

    By Blogger Bryan Sentes, at Sunday, May 22, 2016  

  • " But therein lies undoubtedly a whole other, if not unrelated, discussion."

    I think, Bryan, that your commentary about first- and third-person perspectives is relevant.

    By the way, Socrates' oft-quoted maxim "Know thyself" is extended by his further observation that by knowing oneself one should realize that one knows nothing or next to nothing. He wasn't telling acolytes that knowing themselves (by introspection) would provide truth.

    One doesn't do science (or truth) any good by regurgitating what one dredges up from within, unless one is an unmitigated genius.

    You'd give the common man more credit than he deserves or is capable of discerning.

    Meditation brings emotional (and maybe -- maybe!) intellectual insights but that's not the purpose of meditation; the purpose is for personal enlightenment, as you know.

    And the personal enlightenment of a dolt doesn't further the evolutionary advance of humanity. (Get Lecomte du Noüy's Human Destiny for an exposition.)

    As for your "admiration" of entheogens -- a chemical substance, typically of plant origin, that is ingested to produce a non-ordinary state of consciousness for religious or spiritual purposes -- one can dispense with the idea unless one is inclined to
    a state of rapture that is fraught with saintly silliness,

    Introspection, despite Jung, is a trip down a well of sexually induced remembrances that often induces neuroses (Freud) or worse.

    Introspection, except by those with a freely earned and disciplined libido, is a flawed road to truths and thought.

    RR

    By Blogger RRRGroup, at Sunday, May 22, 2016  

  • I think we've opened a can of worms here that have crawled off in many directions.
    1. Whatever I, or an ever-growing number of researchers, might think of entheogens is, at least immediately, not relevant (of course, one can ask what the experience and neurology thereof reveal, but that is another topic).
    2. Meditation, as the burgeoning field of Contemplative or Noetic Science shows, is about more than personal enlightenment (though surely it is practiced for that, too). It leads not so much to an acquaintance with the self Socrates was told to get to know (though the Greek word was daimon...) as to the Self or Mind in general; thus the person of the meditator him or herself is beside the point. I only brought it up because it provides a disciplined access to experience; I didn't mean for it to be equated with introspection.
    3. All of which is just to say that the first-person perspective, the undeniable qualia of experience, need be taken seriously, not explained away, as any physicalist philosophy of mind is tempted to do -- this is the matter most pertinent to discussions around AI and computational models of intelligence/awareness/consciousness, etc. But, as I remarked, that's a whole other barrel of slippery fish... Maybe we could leave it at that, for now.

    By Blogger Bryan Sentes, at Sunday, May 22, 2016  

  • "... the first person perspective, the undeniable qualia of experience, need by taken seriously, not explained away, as any physicalist philosophy of mind is tempted to do--This is the matter most pertinent to discussions around AI and computational models of intelligence/awareness/consciousness,"

    Yes, this is a point that AI has to deal with, and it is a bit much for the discussion here.

    Meanwhile I'm working on another proposition, brought to life by your "affection" for plant-derived hallucinogens.

    (You inspire some of us to think, off the top of our heads, allowing no need for introspection or meditation, thankfully.)

    RR

    By Blogger RRRGroup, at Sunday, May 22, 2016  

  • Yes Rich, I'm well aware of your disdain for Facebook. But most of us learned a while back that any form of intellectual stimulation would not be in the offing. This generation requires warm fuzzies and platitudes, not to mention extreme brevity. But that is their burden, not mine...I can live with it.

    Interesting that I bring up the concept of "introspection" and you're all over me, yet Bryan supports it in some fashion and you're fawning. That's not meant to be harsh criticism, but an observation...I can live with it also.

    Question: Can appeals to authority "corrupt" original thought and/or "out of the box" thinking?

    By Blogger Tim Hebert, at Sunday, May 22, 2016  

  • I'm not fawning over Bryan's concepts......he knows that from my commentary following his postings at Facebook, but I do admit I baby him.

    He is, for me, a precious, brilliant person.

    And I'm all for "out-of-the-box" thinking and activity....disruption actually.

    But, for the blog, I try to provide material that readers can find or use, and not have to accept my feeble ravings.

    No one I know about has ever come up with an original idea, one that isn't based upon something that preceded it -- no one.

    One can provide an original overlay or patina, but once a "new idea" is examined, one can see the roots or germ that put it in motion.

    Blogs and web-sites that presume to be holy writ (about anything) are hubristic, and I hate that.

    You don't think you proffer original thinking at your blog, right?

    Our job, as bloggers, is to enlighten, and some of us can only do that using the thinking of people smarter than we are.

    If I pretended to be a fount of wisdom here, readers would puke and stay away. (Some already do that, because I'm smug and snarky.)

    So, for me, I rely on people with cachet and intellectual credibility. That's all.

    RR

    By Blogger RRRGroup, at Sunday, May 22, 2016  

  • I am in total agreement regarding the idea of "original thought." But, that does not preclude me from looking at something in a different way...and that way may be totally on the lunatic fringe or be incoherent, but I own it.

    Must I rely on a long dead philosopher to formulate my thoughts?

    As far as my blog goes, no, there is nothing original other than proposing a different way to look at things. Nothing more, nothing less.

    By Blogger Tim Hebert, at Sunday, May 22, 2016  

  • Your views here and at your blog, Tim, are what make me look forward to your commentary; it's always refreshing.

    The point I keep trying to make is that there is nothing new under the sun (including the previous few words).

    If someone comes up with a unique idea or comment, it will get a hyperbolic boost here.

    But introspection (and meditation) are not, necessarily, the well from which spring forth great, new ideas, because, as you well know, the inner mind is full of effluvia that intrude upon thinking and consciousness.

    Some work around the unconscious and come up with a new take on things, but those things are never new or rarely so.

    I should do something on that and see if I can append it to the UFO purpose here.

    (Remember my postings on Edward de Bono's "new think" and "mechanism of mind" several months ago?)

    RR

    By Blogger RRRGroup, at Sunday, May 22, 2016  

  • You've supposedly already read an entry published at http://bigessaywriter.com/blog/artificial-intelligence-impact-on-education, have you? If so, what are your thoughts in general?

    By Blogger Paul Smith, at Tuesday, May 31, 2016  

  • Thanks Paul...

    I just read it.....a well-written piece, which I'll add to my need-for-more-education about AI.

    RR

    By Blogger RRRGroup, at Tuesday, May 31, 2016  

Post a Comment

<< Home