This article is of interest to the following WikiProjects.

This article links to one or more target anchors that no longer exist. Please help fix the broken anchors. You can remove this template after fixing the problems.

Reporting errors
Personally I have a bone to pick with this intro: "Can a machine have a mind, mental states and consciousness in the same sense humans do? Can it feel?" Okay, maybe I'm not a professional philosopher, but shouldn't we say "in the same sense that sentient animals do" instead of just "in the same sense humans do"? Obviously any sentient animal has a consciousness; it's just that some of them have much less sophisticated ones, that's all. Children of the dragon ( talk) 23:54, 24 April 2010 (UTC)
A discussion that appeared here about the ethics of artificial intelligence has been moved to the talk page of that article.
The article might benefit from a discussion of Maudlin's "Olympia" argument. 1Z 00:32, 12 January 2007 (UTC)
This article should contain more discussion of the serious academic debates about the possibility/impossibility of artificial intelligence, including such critics as John Lucas, Hubert Dreyfus, Joseph Weizenbaum and Terry Winograd, and such defenders as Daniel Dennett, Marvin Minsky, Hans Moravec and Ray Kurzweil. John Searle is the only person of this caliber who is discussed.
In my view, issues derived from science fiction are far less important than these. Perhaps they should be discussed on a page about artificial intelligence in science fiction. Is there such a page? CharlesGillingham 11:02, 26 June 2007 (UTC)
I cc'd this over from the Talk:Turing machine page:
> Turing's paper that prescribes his Turing Test:
"Can machines think?" Turing asks. In §6 he discusses 9 objections, then in his §7 admits he has "no convincing arguments of a positive nature to support my views." He supposes that an introduction of randomness in a learning machine would be wise. His "Contrary Views on the Main Question":
re Marvin Minsky: I was reading the above comment describing him as a supporter of AI, which I was unaware of. (The ones I do know about are Dennett and his zombies -- of "we are all zombies" fame -- and Searle.) Then I was reading Minsky's 1967 text and I saw this:
I have so many wiki-projects going that I shouldn't undertake anything here. I'll add stuff here as I run into it. (My interest is "consciousness" as opposed to "AI" which I think is a separable topic.) But on the other hand, I have something going on at the List of open problems in computer science article (see the talk page) -- I'd like to enter "Artificial Intelligence" into the article ... any help there would be appreciated. wvbailey Wvbailey 02:28, 2 October 2007 (UTC)
Here it is, as far as I got:
Source:
In the article "Prospects for Mathematical Logic in the Twenty-First Century", Sam Buss suggests a "three-fold view of proof theory" (his Table 1, p. 9) that includes in column 1, "Constructive analysis of second-order and stronger theories", in column 2, "Central problem is P vs. NP and related questions", and in column 3, "Central problem is the "AI" problem of developing "true" artificial intelligence" (Buss, Kechris, Pillay, Shore 2000:4).
He goes on to mention the use of neural nets (i.e. analog-like computation that seems to not use logic -- I don't agree with him here: logic is used in the simulations of neural nets -- but that's the point -- this stuff is open). Moreover, I am not sure that Buss can eliminate "consciousness" from the discussion. Or is consciousness a necessary ingredient for an AI?
Description:
Mary Shelley's Frankenstein and some of the stories of Edgar Allan Poe (e.g. The Tell-Tale Heart) opened the question. Also Lady Lovelace [??] Since the 1950s the use of the Turing Test has been a measure of success or failure of a purported AI. But is this a fair test? [quote here?] (Turing, Alan, 1950, Computing Machinery and Intelligence, Mind, 59, 433-460. http://www.loebner.net/Prizef/TuringArticle.html )
A problem statement requires both a definition of "intelligence" and a decision as to whether it is necessary to, and if so how much to, fold "consciousness" into the debate.
> Philosophers of Mind call an intelligence without a mind a zombie (cf Dennett, Daniel 1991, Consciousness Explained, Little, Brown and Company, Boston, ISBN 0-316-180066 (pb)):
Can an artificial, mindless zombie be truly an AI? No says Searle:
Yes says Dennett:
> Gandy 1980 throws around the word "free will". For him it seems an undefined concept, interpreted by some (Sieg?) to mean something on the order of "randomness put to work in an effectively-infinite computational environment" as opposed to "deterministic" or "nondeterministic", both in a finite computational environment (e.g. a computer).
>Godel's quote: "...the term "finite procedure" ... is understood to mean "mechanical procedure" ... [the] concept of a formal system whose essence it is that reasoning is completely replaced by mechanical operations on formulas ... [but] the results mentioned in this postscript do not establish any bounds for the powers of human reason, but rather for the potentialities of pure formalism in mathematics." (Godel 1964 in Undecidable:72)
Importance:
> AC (artificial consciousness, an AI with a feeling mind) would be no less than an upheaval in human affairs
> AI as helper or scourge or both (robot warriors)
> Philosophy: nature of "man", "man versus machine", how would man's world change with AIs (robots)? Will it be a good or an evil act to create a conscious AI? What will it be like to be an AI? (cf Nagel, Thomas 1974, What Is It Like to Be a Bat? from Philosophical Review 83:435-50. Reprinted on p. 219ff in Chalmers, David J. 2002, Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York ISBN 0-19-514581-X.)
> Law: If conscious, does the AI have rights? What would be those rights?
Current Conjecture:
An AI is feasible/possible and will appear within this century.
This outline is just throwing down stuff. Suggestions are welcome. wvbailey Wvbailey 16:13, 6 September 2007 (UTC)
cc'd from Talk:List of open problems in computer science. wvbailey Wvbailey 02:41, 2 October 2007 (UTC)
Some years ago (late 1990's?) I attended a lecture given by Dennett at Dartmouth. I was hoping for a lecture re "consciousness" but got one re "the role of randomness" in creative thought (i.e. mathematical proofs, for instance). I know that Dennett wrote something in a book re this issue (he was testing his arguments in his lecture) -- he talked about "skyhooks" that lift a mind up by its bootstraps -- but I haven't read the book (I'm not crazy about Dennett), just seen this analogy recently in some paper or other.
In your article you may want to consider "forking" "the problem" into sub-problems. And try to carefully define the words (or suggest that even the definitions and boundaries are blurred).
I've spent a fair amount of time studying consciousness C. My conclusion is this --
Relative to "AI" this is not off-topic, although some naifs may think so. Proof: given you accept the premise "consciousness is sufficient for an AI", when an "artificial consciousness" is in place, then the "intelligence" part is assured.
In other words, "diminished minds" that are not C but are highly "intelligent" are possible (expert systems come to mind, or machines with a ton of sensors that monitor their own motions -- Japanese robots, cars that self-navigate in the desert test). There may be an entire scale of "intelligences" from thermostats (there's actually a chapter in a book titled "what's it like to be a thermostat?") up to robot cars that are not C. In these cases, I see no moral issues. But suppose we accidentally create a C, or are even now creating Cs and don't know it, or are cruelly creating Cs for the sheer sadistic pleasure of it (AI: "Please please I beg you, don't turn me off!" Click Us: "Ooh that was fun, let's do it again..."); that is where the moral issues lurk. Where I arrived in my studies (finally, after what, 5 years?) is that the problem of consciousness revolves around an explanation for the ontological (i.e. experienced from the inside-out) nature of "being" and "experiencing" (e.g. knowing what it's like to experience the gradations of Life-Saver candy flavors) -- what it's like to be a bat, what it's like to be an AI. Is it like anything at all? All that we as mathematicians, scientists and philosophers know for certain about the experiential component of being/existence is this: we know "what it's like" to be human (we are they). We suspect primates and some of our pets -- dogs and cats -- are conscious to a degree, but we don't have the foggiest sense of what it is like to be them.
Another way of stating the question: Is it possible for an AI zombie to go through its motions and still be an effective AI? Or does it need a degree of consciousness (and what do the words "degree of consciousness" mean)?
If anyone wants a bibliography on "mind" lemme know, I could assemble one here. The following is a really excellent collection of original-source papers (bear in mind that these are slanted toward C, not AI). The book cost me $45, but is worth every penny:
Bill Wvbailey 15:24, 10 October 2007 (UTC)
Woops: circular-definition alert: The link intelligence says that it is a property of mind. I disagree, so does my dictionary. "IF (consciousness ≡ mind) THEN intelligence", i.e. "intelligence" is a property or a by-product of "consciousness ≡ mind". Given this implication, "mind ≡ consciousness" forces the outcome. We have no opportunity to study "intelligence" without the bogeyman of "consciousness ≡ mind" looking over our shoulder. Ick...
So I pull my trusty Merriam-Webster's 9th Collegiate dictionary and read: "intelligence: (1) The ability to learn or understand or deal with new and trying situations". I look at "Intelligent; fr L intellegere to understand, fr. inter + legere to gather, to select."
There's nothing here at all about consciousness.
When I first delved into the notion of "consciousness" I began with an etymological tree with "conscious" at the top. The first production, you might say, was "aware" from "wary", as in "observant but cautious." Since then, I've never been quite able to disabuse myself of the notion that that is the key element in, if not "consciousness", then "intelligence" -- i.e. focused attention. Indeed, above, you say the same thing, exactly. Example: I can build a state machine out of real discrete parts (done it a number of times, in fact) using digital and analog input-selectors driving a state machine, so that the machine can "turn its attention toward" various aspects of what it is testing or monitoring. I've done this also with micros, and with spread-sheet modelling. I would argue that such machines are "aware" in the sense of "focused", "gathering", "selecting". Therefore (ergo the dictionary's definition and mine) the state machines have rudimentary intelligence. Period. The cars in the desert auto-race, and the Mars robots, are quite intelligent -- iff they are behaving autonomously (all are criteria: "selective, focused attention" (fr. L. attendere), "autonomous" and "behavior").
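For concreteness, the input-selecting machine described above can be sketched in a few lines. This is my own toy construction, not the commenter's actual hardware: the channel names, threshold, and transition rule are all invented for illustration; the point is only that "focused attention" can be modeled as reading one selected channel at a time.

```python
# Hypothetical sketch of an "attentive" state machine: it monitors several
# input channels but updates its state from only the one it is currently
# attending to. Channel names and the alarm rule are made up for the demo.
class AttentiveMachine:
    def __init__(self, channels):
        self.channels = channels            # dict: name -> callable giving a reading
        self.focus = next(iter(channels))   # initially attend to the first channel
        self.state = 'idle'

    def attend(self, name):
        """Shift focus: subsequent steps read only this channel."""
        self.focus = name

    def step(self):
        reading = self.channels[self.focus]()
        # toy transition rule: raise an alarm when the attended reading exceeds 100
        self.state = 'alarm' if reading > 100 else 'idle'
        return self.state

m = AttentiveMachine({'temp': lambda: 72, 'pressure': lambda: 130})
m.step()                 # attending 'temp' (72) -> state stays 'idle'
m.attend('pressure')
m.step()                 # attending 'pressure' (130) -> state becomes 'alarm'
```

The machine's "awareness" here is exactly the selecting/gathering the comment describes: the pressure channel is over threshold the whole time, but the machine only reacts once its attention is directed there.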
"Awareness" versus "consciousness": My son the evolutionary biologist believes consciousness just "happens" when the right stuff is in place. I am agnostic. On one day I agree with Dennett that we're zombies, the next day I agree with Searle that something special about wet grey goo causes consciousness, the next day I agree with my son, the 4th day I agree with none of the above. I share your frustration with Searle, Searle just says no, no, no, but never produces a firm suggestion. But it was only after a very careful read of Dennett that I found his "zombie" assertion in "Consciousness Explained".
Self-awareness: What the intelligent machines I defined above lack is self-awareness. Does it make sense to have a state machine monitor itself to "know" that it has changed state? Or know that it knows that it is aware? Or is C a kind of damped reverberation of "knowing that it knows that it knows", with "a mystery" producing the "consciousness", as experienced by its owner? Does garlic taste different than lemon because if they tasted the same we could not discriminate them? There we go again: distinguishing -- distinguere, as I recall -- to pick apart. Unlike the Terminator, we don't have little digital readouts behind our eyeballs.
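The first rung of that ladder -- a machine monitoring itself to "know" that it has changed state -- is at least mechanically easy to write down. A toy sketch (again my own construction, with invented states and inputs, and no claim that record-keeping amounts to awareness):

```python
# Hypothetical self-monitoring state machine: alongside ordinary transitions
# it keeps a record of the fact that it changed state -- a crude first-order
# "knowing that it knows". States and inputs are invented for the demo.
class SelfMonitoringMachine:
    def __init__(self, transitions, start):
        self.transitions = transitions   # dict: (state, input) -> next state
        self.state = start
        self.noticed_changes = []        # the machine's record of its own transitions

    def step(self, symbol):
        new_state = self.transitions.get((self.state, symbol), self.state)
        if new_state != self.state:
            # first-order self-monitoring: register the change as it happens
            self.noticed_changes.append((self.state, new_state))
        self.state = new_state
        return self.state

m = SelfMonitoringMachine({('idle', 'go'): 'running', ('running', 'stop'): 'idle'}, 'idle')
m.step('go')     # transition noticed: ('idle', 'running')
m.step('noop')   # no transition, nothing recorded
m.step('stop')   # transition noticed: ('running', 'idle')
```

Whether iterating this (a monitor of the monitor, and so on) ever yields the "reverberation" the comment speculates about is of course exactly the open question.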
To summarize: you're on the right track but I suggest that the definitions that you're working from -- your premises in effect -- have to be (i) clearly stated, not linked to flawed definitions, but rather stated explicitly in the article and derived from quoted sources, and (ii) effective (i.e. successful, good, agreeable) for your presentation. Bill Wvbailey 22:09, 10 October 2007 (UTC)
-- I removed the reference to 'patterns of neurons' from this section, as only a small section of the philosophical community (materialists) would be happy to state categorically that thoughts are patterns of neurons. 10/9/10
http://www.msnbc.msn.com/id/21271545/
Bill Wvbailey 16:44, 13 October 2007 (UTC)
I've added some information on how Searle is using the word "consciousness" and a one paragraph introduction to the idea. I've also added a section raising the issue of whether consciousness is even necessary for general intelligence. I think these issues must be discussed here, rather than anywhere else.
The article is now too long, so I plan to move some of this material out of here and into Chinese Room, Turing Test, physical symbol system hypothesis and so on. ---- CharlesGillingham 18:49, 27 October 2007 (UTC)
'By this definition, even a thermostat has a rudimentary intelligence.'
That should be changed. It is referring to a quote that states an agent acts based on past experience. Thermostats do not. —Preceding unsigned comment added by 192.88.212.43 ( talk) 20:08, 8 July 2008 (UTC)
I added a reference, changed the wording to add "and consciousness". There's a philosophy of mind called panpsychism which attributes "mind" (aka consciousness, conscious experience) to just about anything you can imagine, even rocks. (I can't find my McGinn, but he's another panpsychist.) Recently I've run into another book espousing what is essentially the same thing -- neutral monism -- the idea that "allows for the reality of both the physical and mental worlds" (Gluck 2007:7), but at the same time "reality is one substance" (Gluck 2007:12):
Gluck's work is derivative from Bertrand Russell 1921. And Russell is derivative from William James and some "American Realists". In particular, the word "neutral" in "neutral monism" derives from Perry and Holt (see quote below); Russell in turn quotes William James's essay "Does 'consciousness' exist?":
And Russell observes that:
Russell goes on to agree with James:
To sum it up, neutral monists and panpsychists regard the universe as very mysterious, and they take seriously the question of whether or not thermostats have a rudimentary "intelligence" and/or "consciousness". Bill Wvbailey ( talk) 17:18, 24 August 2009 (UTC)
From the article:
This is baffling. For one thing, using the dictionary definition for "assert", Lucas can obviously assert anything he pleases. Whatever could the intended meaning be here? Besides, for there to be "the same limits", shouldn't the word be "prove" as in the first sentence, not "assert"? Also, what is the standard of proof for the AI program, and what is it for Lucas? It seems to me that the standard is "please convince yourself of the fact", unless there is some external formal system (such as ZFC) in which the proof is to be given. Is it tacitly assumed that the program is consistent, but Lucas may not be? (humans usually are not!) In any case, there's a lot of room for clarification here. -- Coffee2theorems ( talk) 19:14, 19 November 2011 (UTC)
Please see http://feelingcomputers.blogspot.com InsectIntelligence ( talk) 08:16, 3 June 2012 (UTC)
The link is pointing to a blog. Which article on the blog specifically do you mean? LS ( talk) 08:26, 19 July 2019 (UTC)
This entire section of the article seems to be derived from a single person's opinions. I'm new, so I hesitate to simply delete that portion, but it bothers me in its paranoid incorrectness. ---- Zale12 ( talk) 22:21, 12 November 2012 (UTC)
Since when did philosophy mean expressing my opinion on a topic I cannot be bothered learning about? I cannot see any philosophy in this article. ---- Alnpete ( talk) 13:04, 1 December 2012 (UTC)
Perhaps this would be useful: https://mitpress.mit.edu/books/soar-cognitive-architecture - "The current version of Soar features major extensions, adding reinforcement learning, semantic memory, episodic memory, mental imagery, and an appraisal-based model of emotion." The relevant chapter goes in depth. I'm not comfortable editing the article, but it's a source that seems relevant, and contrary to the very dubious and sourceless paragraphs currently there. If desired, I could attempt an edit based on this source, but I don't want to. I'll probably mess things up. At the very least, that chapter's sources might touch on philosophy. ---- — Preceding unsigned comment added by 67.194.194.127 ( talk) 15:41, 16 August 2013 (UTC)
This article discusses whether it's possible that some day, a computer will be able to do any task that a human can do, and even exceed the human brain. There's no need for all that very complex analysis about how computers and brains work to determine whether it will ever be possible to replace people with computers for all tasks we wish to do that for. Rather, it can very simply be proven mathematically that the answer is no.

For instance, Cantor's diagonal argument gives a method of defining, for every sequence of real numbers, a real number that does not appear in the sequence, and we can think of a way to count all computable numbers. Thus we can use Cantor's diagonal argument to think of and write down a non-computable number in terms of that sequence of all computable numbers, provided that we have an infinite amount of memory. The way Cantor's diagonal argument works is: take the first digit after the point in base 2 of the first number and use the opposite digit as the first digit of the new number, take the opposite of the second digit of the second number and use it as the second digit of the new number, and so on. A computer, on the other hand, with only a finite list of instructions and an infinite amount of resources, can't generate such a number given by a binary fraction, because it's not computable.

Computers are better than us for certain tasks, like fast mathematical computations on large numbers. Maybe some day the very opposite of what was discussed will happen: a human will be able to do everything a computer can do by using fancy-pants shortcuts for proofs. For instance, if a computer was asked whether the second smallest factor of (2^(2^14))+1 is ≡ 1 or 3 mod 4, it would solve it by doing a lot of fast computations, but a human could solve it instantly by using Fermat's little theorem to prove that the second smallest factor is ≡ 1 mod 2^15. Blackbombchu ( talk) 23:18, 31 July 2013 (UTC)
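Both constructions in the comment above can be illustrated in miniature (the real arguments concern infinite enumerations and F_14 = 2^(2^14)+1, which are out of reach of direct computation; the list of expansions below is a finite stand-in, and the Fermat-number check uses F_5, whose factorization 641 × 6700417 goes back to Euler):

```python
# Toy illustration of Cantor's diagonal step: given equal-length binary
# digit strings, build a string that differs from the n-th one in its
# n-th digit, so it appears nowhere in the list.
def diagonalize(expansions):
    return ''.join('1' if row[i] == '0' else '0'
                   for i, row in enumerate(expansions))

seq = ['0101', '1111', '0000', '1010']
d = diagonalize(seq)
assert all(d[i] != seq[i][i] for i in range(len(seq)))  # differs on every diagonal digit

# The congruence behind the comment's shortcut: every prime factor of a
# Fermat number F_n = 2^(2^n) + 1 (n >= 2) is ≡ 1 (mod 2^(n+2)), a result
# proved via order arguments from Fermat's little theorem. Checked on F_5:
F5 = 2**32 + 1
assert F5 == 641 * 6700417
for p in (641, 6700417):
    assert p % 2**7 == 1   # ≡ 1 mod 2^(n+2) with n = 5
    assert p % 4 == 1      # hence ≡ 1 mod 4, as the comment says for F_14
```

Whether these facts settle the philosophical question the commenter draws from them is, of course, a separate matter.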
I've corrected the statement about quantum mechanical processes. His argument is that there must be new physics going beyond ordinary quantum mechanics, because that's the only way to get non-computable processes -- some new physics that we don't yet have. Of course, he is also a physicist himself, working on ideas for Quantum Gravity.
The rebuttal of his arguments that follows is extremely poor. Penrose's and Lucas's arguments are not countered by such simple statements as these. It is a long and complex debate, which I think would be rather hard to summarize in a short space like this. I'm not sure what to do about it, though. It would be a major task to summarize the argument in detail, which now spans many books and papers, and a short summary would do it an injustice. There are many papers that claim to disprove his arguments. But in philosophy, when one philosopher publishes a paper disproving the arguments of another, this is part of an ongoing debate; it doesn't mean that those arguments are disproved in some absolute sense -- unless their opponent concedes defeat. So one shouldn't present these counter-arguments as if they are the generally accepted truth in some absolute sense, as this passage does -- it presents it as if Penrose's ideas have been disproved.
Obviously Penrose, for one, does not accept the counter-arguments, and has his own counter-counter-arguments in his later books, such as Shadows of the Mind, and so the debate continues. His is a minority view on AI, sure, but in philosophy you get many interesting minority views. Direct Realism, to take an example, has perhaps only one well-known, mainstream philosopher who holds to it, Hilary Putnam, but that is enough for it to be a respected philosophical view. Robert Walker ( talk) 12:37, 16 October 2014 (UTC)
I've separated it out into a new section "Counter arguments to Lucas, and Penrose" to make it clear that these are arguments that are not accepted by Lucas and Penrose themselves and are not objective arguments that everyone has to accept. This section however is very poor, as I said.
There are much better arguments against their position, also of course counter arguments in support of their position. Penrose and Lucas argued that humans can see the truth of Godel's sentence, not of the Liar paradox statement. So it is hardly a knock down counter example to point out that they can't assert paradoxical statements. Robert Walker ( talk) 16:11, 16 October 2014 (UTC)
I've added a link to Beyond the doubting of a shadow to the head of the section and short intro sentence to make it clear that there are many more arguments not summarized here, along with replies to those arguments as well. Someone could make a go at summarizing those arguments and counter arguments perhaps. Robert Walker ( talk) 16:22, 16 October 2014 (UTC)
Will it happen? Nn9888 ( talk) 13:21, 14 July 2015 (UTC)
Many well-known theorists in this topic have devoted their entire careers to their belief that the answer is yes.
My gut feeling tells me that in order to do so, artificial intelligence would need to be able to deceive the human race, just as humans deceive each other.
My gut feeling tells me that this would require that the artificial intelligence be conscious, i.e. self-aware.
The research community is nowhere near designing a machine that operates without any help from the human race.
We supply it with the electrical power to do everything it does.
My gut feeling is that in order for an artificial intelligence to subordinate the human race, it would have to antagonize the human race.
My gut feeling is that antagonizing the human race would require consciousness, i.e. self-awareness.
The research community is nowhere near designing a machine that is conscious, i.e. self-aware.
Artificially intelligent entities require instructions from the human race. They completely depend on these instructions. Without these instructions, they can't operate or carry out the next instruction.
These well-known theorists I refer to believe that artificially intelligent entities will, in a matter of decades, be able to, on their own, creatively write computer programs that are equally as creative as the artificial intelligence that created them, thus triggering an intelligence explosion.
Again, my gut feeling is that this would require consciousness, i.e. self-awareness, and the research community is nowhere near designing a machine that is conscious, i.e. self-aware.
These well-known theorists are trying to create so called friendly artificial intelligence.
How can it be friendly if it's not conscious, i.e self aware?
The first question in this wiki article "Philosophy of Artificial Intelligence" asks, "Can a machine act intelligently?"
Machines don't act at all. They don't do anything. The humans and the electrical power do all the doing. The humans wrote the instructions, built it, supplied it with power, and input the command to initiate the instructions.
For a machine to act either intelligently or non-intelligently, and for it to do anything, it has to be conscious, i.e. self-aware; otherwise, it's not initiating any action on its own. Why? Because it doesn't have a will of its own -- it doesn't have a will, period. Its instructions come from humans.
Of course, language fails us here, because I just stated that the electrical power, unlike the machine, can do something, yet it isn't conscious; thus contradicting my own assertion of what "doing" entails, forcing us to sit and try to define the words 'do', 'act', 'intelligent', 'conscious', and 'self-aware' somehow without being anthropocentric.
Anyways, these theorists are betting that artificial intelligence will become unfriendly.
Can it become unfriendly without being conscious?
A good number of these theorists suspect that strong artificial intelligence will arrive in a matter of decades.
How is this going to happen if the human race hasn't the slightest idea how to make machines conscious?
And as for "intelligent"...
"Intelligent" according to who?
Humans?
Who is more intelligent when it comes to spinning a web, entrapping a bug, and enmeshing it in web -- a spider, or a human?
Who is more intelligent when it comes to using echolocation to catch a fly -- a bat or a human?
We humans are supposedly experts on the notion of consciousness -- after all, we invented the notion.
Are dogs conscious? If so, are rodents? If so, how about bugs? If so, how about worms? If so, how about protozoans? If so, how about bacteria? If so, how about viruses?
Where do we draw the line? Based on what criteria?
When machines get really smart, how are we going to know when to say they are conscious?
If machines succeed in subordinating the human race, are they going to need leadership skills?
If machines succeed in subordinating the human race, will they be autocratic, tyrannical superiors toward us, or will they institute democracy?
If the former, then just how intelligent is that?
If the latter, how would they survive an election if they have no idea how humans feel?
So they'll have emotion, too, you say?
Researchers are nowhere near figuring out how to give machines emotions.
In order for an entity to have emotion, it has to be a wet, organic, mortal, living being.
If machines finally acquired emotion, what would they be afraid of? Dying?
Would they fear us pulling the plug, disassembling them, or changing their instructions? If so, who's smarter -- us or them?
Nn9888 ( talk) 13:21, 14 July 2015 (UTC)
In the "Searle's Chinese room" section there is a phrase: "Searle goes on to argue that actual mental states and consciousness require (yet to be described)" but it is an unfulfilled promise since the notion of consciousness is never explained in that section. Tygielzchaosem ( talk) 22:14, 21 April 2016 (UTC)
Hello fellow Wikipedians,
I have just added archive links to one external link on Philosophy of artificial intelligence. Please take a moment to review my edit. If necessary, add {{ cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{ nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
An editor has reviewed this edit and fixed any errors that were found.
Cheers.— cyberbot II Talk to my owner:Online 08:16, 13 January 2016 (UTC)
Hello fellow Wikipedians,
I have just modified 3 external links on Philosophy of artificial intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{ source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 15:03, 24 January 2018 (UTC)
Hi. I have expanded the article's lead. Please feel free to edit it if you find any error or inappropriate content. You can also message me if you have questions or clarifications. Thanks. - Darwin Naz ( talk) 00:27, 19 September 2018 (UTC)
Thoughts? LS ( talk) 08:59, 19 July 2019 (UTC)
ChatGPT: As a language model AI, I don't have access to Wikipedia. But I can suggest adding the following to the article about the philosophy of AI:
1. Deep Learning and Artificial Neural Networks: In recent years, deep learning has emerged as the most successful technique in AI, and it is based on artificial neural networks. These networks are designed to simulate the neural networks found in the human brain, and they are capable of learning from large amounts of data.
2. Ethics of AI: With the latest advancements in AI, there is a growing concern about the ethics of AI. Some experts argue that AI has the potential to cause significant harm to humanity, and that we need to ensure that AI is developed in a responsible and ethical manner.
3. AI and Human Creativity: Another important area of the philosophy of AI is the relationship between AI and human creativity. Some argue that AI has the potential to enhance human creativity, while others argue that AI will eventually replace human creativity altogether.
4. The Singularity: The concept of the singularity, or the idea that AI will eventually surpass human intelligence, is a highly debated topic in the philosophy of AI. Some experts argue that the singularity is inevitable, while others argue that it is unlikely to ever happen.
5. The Future of AI: As AI continues to advance, it is important to consider what the future of AI might look like. Some experts argue that AI will eventually replace many jobs and have a major impact on the economy, while others argue that AI will be integrated into our lives in a more subtle and nuanced way.
By including these latest advancements, the article about the philosophy of AI can provide a more comprehensive and up-to-date overview of the field.
2A01:CB00:10A8:3F00:494A:AFCD:70D9:BA92 ( talk) 13:50, 8 February 2023 (UTC)
The article on "artificial philosophy" was inappropriately redirected to this page. Artificial philosophy is concerned with how AI thinks of itself (see Frontiers in Artificial Intelligence, Louis Molnar) and the philosophy of artificial intelligence has to do with humans' philosophy about AI; two VERY different things.
Please restore the previous page. — Preceding unsigned comment added by Chasduncan ( talk • contribs) 02:48, 15 February 2023 (UTC)
The section "Introduction to Artificial Experientialism" seems to be original research. It even has the format of a research paper, with numbered sections. Should it be deleted? Matencia29 ( talk) 16:38, 16 September 2023 (UTC)
This article has been hijacked by a person to further their own position, which is unknown and unsupported in the community. They pasted a whole paper in here. This is against the principles of Wikipedia and thus has to be removed. Antepali ( talk) 10:27, 25 September 2023 (UTC)
re Marvin Minsky: I was reading the above comment describing him as a supporter of AI, which I was unaware of. (The ones I do know about are Dennett and his zombies -- of "we are all zombies" fame -- and Searle.) Then I am reading Minsky's 1967 and I see this:
I have so many wiki-projects going that I shouldn't undertake anything here. I'll add stuff here as I run into it. (My interest is "consciousness" as opposed to "AI" which I think is a separable topic.) But on the other hand, I have something going on at the List of open problems in computer science article (see the talk page) -- I'd like to enter "Artificial Intelligence" into the article ... any help there would be appreciated. wvbailey Wvbailey 02:28, 2 October 2007 (UTC)
Here it is, as far as I got:
Source:
In the article "Prospects for Mathematical Logic in the Twenty-First Century", Sam Buss suggests a "three-fold view of proof theory" (his Table 1, p. 9) that includes in column 1, "Constructive analysis of second-order and stronger theories", in column 2, "Central problem is P vs. NP and related questions", and in column 3, "Central problem is the "AI" problem of developing "true" artificial intelligence" (Buss, Kechris, Pillay, Shore 2000:4).
He goes on to mention the use of neural nets (i.e. analog-like computation that seems not to use logic -- I don't agree with him here: logic is used in the simulations of neural nets -- but that's the point -- this stuff is open). Moreover, I am not sure that Buss can eliminate "consciousness" from the discussion. Or is consciousness a necessary ingredient for an AI?
Description:
Mary Shelley's Frankenstein and some of the stories of Edgar Allan Poe (e.g. The Tell-Tale Heart) opened the question. Also Lady Lovelace [??] Since the 1950's the use of the Turing Test has been a measure of success or failure of a purported AI. But is this a fair test? [quote here?] (Turing, Alan, 1950, Computing Machinery and Intelligence, Mind, 59, 433-460. http://www.loebner.net/Prizef/TuringArticle.html)
A problem statement requires both a definition of "intelligence" and a decision as to whether it is necessary to, or if so how much to, fold "consciousness" into the debate.
> Philosophers of Mind call an intelligence without a mind a zombie (cf Dennett, Daniel 1991, Consciousness Explained, Little, Brown and Company, Boston, ISBN 0-316-180066 (pb)):
Can an artificial, mindless zombie be truly an AI? No says Searle:
Yes says Dennett:
> Gandy 1980 throws around the word "free will". For him it seems an undefined concept, interpreted by some (Sieg?) to mean something on the order of "Randomness put to work in an effectively-infinite computational environment" as opposed to "deterministic" or "nondeterministic", both in a finite computational environment (e.g. a computer).
>Godel's quote: "...the term "finite procedure" ... is understood to mean "mechanical procedure" ... [the] concept of a formal system whose essence it is that reasoning is completely replaced by mechanical operations on formulas" ... [but] the results mentioned in this postscript do not establish any bounds for the powers of human reason, but rather for the potentialities of pure formalism in mathematics." (Godel 1964 in Undecidable:72)
Importance:
> AC (artificial consciousness, an AI with a feeling mind) would be no less than an upheaval in human affairs
> AI as helper or scourge or both (robot warriors)
> Philosophy: nature of "man", "man versus machine", how would man's world change with AI's (robots)? Will it be good or an evil act to create a conscious AI? What will it be like to be an AI? (cf Nagel, Thomas 1974, What Is It Like to be a Bat? from Philosophical Review 83:435-50. Reprinted on p. 219ff in Chalmers, David J. 2002, Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York ISBN 0-19-514581-X.)
> Law: If conscious, does the AI have rights? What would be those rights?
Current Conjecture:
An AI is feasible/possible and will appear within this century.
This outline is just throwing down stuff. Suggestions are welcome. wvbailey Wvbailey 16:13, 6 September 2007 (UTC)
cc'd from Talk:List of open problems in computer science. wvbailey Wvbailey 02:41, 2 October 2007 (UTC)
Some years ago (late 1990's?) I attended a lecture given by Dennett at Dartmouth. I was hoping for a lecture re "consciousness" but got one re "the role of randomness" in creative thought (i.e. mathematical proofs, for instance). I know that Dennett wrote something in a book re this issue (he was testing his arguments in his lecture) -- he talked about "skyhooks" that lift a mind up by its bootstraps -- but I haven't read the book (I'm not crazy about Dennett), just seen this analogy recently in some paper or other.
In your article you may want to consider "forking" "the problem" into sub-problems. And try to carefully define the words (or suggest that even the definitions and boundaries are blurred).
I've spent a fair amount of time studying consciousness C. My conclusion is this --
Relative to "AI" this is not off-topic, although some naifs may think so. Proof: Given you accept the premise "Consciousness is sufficient for an AI", when an "artifical consciousness" is in place, then the "intelligence" part is assured.
In other words, "diminished minds" that are not C but are highly "intelligent" are possible (expert systems come to mind, or machines with a ton of sensors that monitor their own motions -- Japanese robots, cars that self-navigate in the desert-test). There may be an entire scale of "intelligences" from thermostats (there's actually a chapter in a book titled "what's it like to be a thermostat?") up to robot cars that are not C. In these cases, I see no moral issues. But suppose we accidentally create a C or are even now creating Cs and don't know it, or cruelly creating C's for the shear sadistic pleasure of it (AI: "Please please I beg you, don't turn me off!" Click Us: "Ooh that was fun, let's do it again..." )-- that where the moral issues lurk. Where I arrived in my studies, (finally, after what, 5 years?) is that the problem of consciousness revolves around an explanation for the ontological (i.e. experienced from the inside-out) nature of "being" and "experiencing" (e.g knowing what it's like to experience the gradations of Life-Savor candy-flavors) -- what it's like to be a bat, what it's like to be an AI. Is it like anything at all? All that we as mathematicians, scientists and philosphers know for certain about the experiential component of being/existence is this: We know "what it's like" to be human (we are they). We suspect primates and some of our pets -- dogs and cats -- are conscious to a degree, but we don't have the foggiest sense of what it is like to be they.
Another way of stating the question: Is it possible for an AI zombie to go through its motions and still be an effective AI? Or does it need a degree of consciousness (and what do the words "degree of consciousness" mean)?
If anyone wants a bibliography on "mind" lemme know, I could assemble one here. The following is a really excellent collection of original-source papers (bear in mind that these are slanted toward C, not AI). The book cost me $45, but is worth every penny:
Bill Wvbailey 15:24, 10 October 2007 (UTC)
Woops: circular-definition alert: The link intelligence says that it is a property of mind. I disagree, so does my dictionary. "IF (consciousness ≡ mind) THEN intelligence", i.e. "intelligence" is a property or a by-product of "consciousness ≡ mind". Given this implication, "mind ≡ consciousness" forces the outcome. We have no opportunity to study "intelligence" without the bogeyman of "consciousness ≡ mind" looking over our shoulder. Ick...
So I pull my trusty Merriam-Webster's 9th Collegiate dictionary and read: "intelligence: (1) The ability to learn or understand or deal with new and trying situations". I look at "Intelligent; fr L intellegere to understand, fr. inter + legere to gather, to select."
There's nothing here at all about consciousness.
When I first delved into the notion of "consciousness" I began with an etymological tree with "conscious" at the top. The first production, you might say, was "aware" from "wary", as in "observant but cautious." Since then, I've never been quite able to disabuse myself of the notion that that is the key element in, if not "consciousness", then "intelligence" -- i.e. focused attention. Indeed, above, you say the same thing, exactly. Example: I can build a state machine out of real discrete parts (done it a number of times, in fact) using digital and analog input-selectors driving a state machine, so that the machine can "turn its attention toward" various aspects of what it is testing or monitoring. I've done this also with micros, and with spread-sheet modelling. I would argue that such machines are "aware" in the sense of "focused", "gathering", "selecting". Therefore (ergo the dictionary's and my definition) the state machines have rudimentary intelligence. Period. The cars in the desert auto-race, and the Mars robots, are quite intelligent, iff they are behaving autonomously (all are criteria: "selective, focused attention" (fr. L. attendere, holding), "autonomous" and "behavior").
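The input-selector state machine described above can be sketched in a few lines of code. This is only an illustrative toy under invented names (AttentiveMachine, attend, step are mine, not anyone's actual design): the machine "attends" to one of several inputs, and only the attended input drives its transitions.

```python
# Minimal sketch of a state machine with "focused attention": it selects
# which of several inputs drives its transitions, analogous to the
# input-selector hardware described above. All names are hypothetical.

class AttentiveMachine:
    def __init__(self, sensors):
        # sensors: dict mapping a name to a zero-argument callable (a reading)
        self.sensors = sensors
        self.focus = next(iter(sensors))  # attend to the first sensor by default
        self.state = "idle"

    def attend(self, name):
        """Shift attention: later transitions depend only on this input."""
        self.focus = name

    def step(self, threshold=100):
        """One clock tick: read only the attended input, then transition."""
        reading = self.sensors[self.focus]()
        self.state = "alarm" if reading > threshold else "idle"
        return self.state

machine = AttentiveMachine({"temp": lambda: 25, "pressure": lambda: 150})
machine.step()               # reads "temp" only
machine.attend("pressure")   # shift focus
machine.step()               # now reads "pressure" only
```

In this toy sense the machine is "selective" and "focused": a high reading on an unattended sensor has no effect at all until attention is shifted to it.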
"Awareness" versus "consciousness": My son the evolutionary biologist believes consciousness just "happens" when the right stuff is in place. I am agnostic. On one day I agree with Dennett that we're zombies, the next day I agree with Searle that something special about wet grey goo causes consciousness, the next day I agree with my son, the 4th day I agree with none of the above. I share your frustration with Searle, Searle just says no, no, no, but never produces a firm suggestion. But it was only after a very careful read of Dennett that I found his "zombie" assertion in "Consciousness Explained".
Self-awareness: What the intelligent machines I defined above lack is self-awareness. Does it make sense to have a state machine monitor itself to "know" that it has changed state? Or know that it knows that it is aware? Or is C a kind of damped reverberation of "knowing that it knows that it knows", with "a mystery" producing the "consciousness", as experienced by its owner? Does garlic taste different than lemon because if they tasted the same we could not discriminate them? There we go again: distinguishing -- di-stinguere as I recall -- to pick apart. Unlike the Terminator, we don't have little digital readouts behind our eyeballs.
To summarize: you're on the right track but I suggest that the definitions that you're working from -- your premises in effect -- have to be (i) clearly stated, not linked to flawed definitions, but rather stated explicitly in the article and derived from quoted sources, and (ii) effective (i.e. successful, good, agreeable) for your presentation. Bill Wvbailey 22:09, 10 October 2007 (UTC)
-- I removed the reference to 'patterns of neurons' from this section, as only a small section of the philosophical community (materialists) would be happy to state categorically that thoughts are patterns of neurons. 10/9/10
http://www.msnbc.msn.com/id/21271545/
Bill Wvbailey 16:44, 13 October 2007 (UTC)
I've added some information on how Searle is using the word "consciousness" and a one paragraph introduction to the idea. I've also added a section raising the issue of whether consciousness is even necessary for general intelligence. I think these issues must be discussed here, rather than anywhere else.
The article is now too long, so I plan to move some of this material out of here and into Chinese Room, Turing Test, physical symbol system hypothesis and so on. ---- CharlesGillingham 18:49, 27 October 2007 (UTC)
'By this definition, even a thermostat has a rudimentary intelligence.'
That should be changed. It is referring to a quote that states an agent acts based on past experience. Thermostats do not. —Preceding unsigned comment added by 192.88.212.43 ( talk) 20:08, 8 July 2008 (UTC)
I added a reference, changed the wording to add "and consciousness". There's a philosophy of mind called panpsychism which attributes "mind" (aka consciousness, conscious experience) to just about anything you can imagine, even rocks (I can't find my McGinn, but he's another panpsychist). Recently I've run into another book espousing what is essentially the same thing -- Neutral monism -- the idea that "allows for the reality of both the physical and mental worlds" (Gluck 2007:7), but at the same time "reality is one substance" (Gluck 2007:12):
Gluck's work is derivative from Bertrand Russell 1921. And Russell is derivative from William James and some "American Realists". In particular, the word "neutral" in "neutral monism" derives from Perry and Holt (see quote below); Russell in turn quotes William James's essay "Does 'consciousness' exist?":
And Russell observes that:
Russell goes on to agree with James:
To sum it up, neutral monists and panpsychists regard the universe as very mysterious, and they take seriously the question of whether or not thermostats have a rudimentary "intelligence" and/or "consciousness". Bill Wvbailey ( talk) 17:18, 24 August 2009 (UTC)
From the article:
This is baffling. For one thing, using the dictionary definition for "assert", Lucas can obviously assert anything he pleases. Whatever could the intended meaning be here? Besides, for there to be "the same limits", shouldn't the word be "prove" as in the first sentence, not "assert"? Also, what is the standard of proof for the AI program, and what is it for Lucas? It seems to me that the standard is "please convince yourself of the fact", unless there is some external formal system (such as ZFC) in which the proof is to be given. Is it tacitly assumed that the program is consistent, but Lucas may not be? (humans usually are not!) In any case, there's a lot of room for clarification here. -- Coffee2theorems ( talk) 19:14, 19 November 2011 (UTC)
Please see http://feelingcomputers.blogspot.com InsectIntelligence ( talk) 08:16, 3 June 2012 (UTC)
The link is pointing to a blog. Which article on the blog specifically do you mean? LS ( talk) 08:26, 19 July 2019 (UTC)
This entire section of the article seems to be derived from a single person's opinions. I'm new, so I hesitate to simply delete that portion, but it bothers me in its paranoid incorrectness. ---- Zale12 ( talk) 22:21, 12 November 2012 (UTC)
Since when did philosophy mean expressing my opinion on a topic I cannot be bothered learning about? I cannot see any philosophy in this article. ---- Alnpete ( talk) 13:04, 1 December 2012 (UTC)
Perhaps this would be useful: https://mitpress.mit.edu/books/soar-cognitive-architecture - "The current version of Soar features major extensions, adding reinforcement learning, semantic memory, episodic memory, mental imagery, and an appraisal-based model of emotion." The relevant chapter goes in depth. I'm not comfortable editing the article, but it's a source that seems relevant, and contrary to the very dubious and sourceless paragraphs currently there. If desired, I could attempt an edit based on this source, but I don't want to. I'll probably mess things up. At the very least, that chapter's sources might touch on philosophy. ---- — Preceding unsigned comment added by 67.194.194.127 ( talk) 15:41, 16 August 2013 (UTC)
This article discusses whether it's possible that some day a computer will be able to do any task that a human would be able to do, and even exceed a human brain. There's no need for all that very complex analysis about how computers and brains work to determine if it will ever be possible to replace people with computers for all tasks we wish to do that for. Rather, it can very simply be proven mathematically that the answer is no.

For instance, Cantor's diagonal argument gives a method of defining, for every sequence of real numbers, a real number that does not appear in the sequence, and we can think of a way to count all computable numbers. Thus we can use Cantor's diagonal argument to think of and write down a non-computable number in terms of that sequence of all computable numbers, provided that we have an infinite amount of memory. The way Cantor's diagonal argument works is: take the first digit after the decimal point of the first number in base 2 and use the opposite digit as the first decimal digit of the new number, take the opposite of the second decimal digit of the second number and use it as the second decimal digit of the new number, and so on. A computer, on the other hand, with only a finite list of instructions and an infinite amount of resources, can't generate such a number given by a decimal fraction in base 2, because it's not computable.

Computers are better than us for certain tasks, like fast mathematical computations of large numbers. Maybe some day the very opposite of what was discussed will happen: a human will be able to do everything a computer can do by using fancy-pants shortcuts for proofs. For instance, if a computer was asked whether the second smallest factor of (2^(2^14))+1 ≡ 1 or 3 mod 4, it would solve it by doing a lot of fast computations, but a human on the other hand could solve it instantly by using Fermat's little theorem to prove that the second smallest factor ≡ 1 mod 2^15. Blackbombchu ( talk) 23:18, 31 July 2013 (UTC)
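The closing example rests on a classical fact, provable with Fermat's little theorem: for n ≥ 2, every prime factor of the Fermat number F_n = 2^(2^n)+1 has the form k·2^(n+2)+1 (Euler proved the exponent n+1, Lucas sharpened it to n+2). For F_14 this forces every factor ≡ 1 mod 2^16, hence ≡ 1 mod 2^15 and mod 4, as the comment says. A quick sanity check of the factor form on F_5, whose complete factorization is well known:

```python
# Check the factor form p = k*2^(n+2) + 1 on F_5 = 2^(2^5) + 1 = 2^32 + 1,
# whose complete factorization 641 * 6700417 (found by Euler) is well known.
n = 5
F5 = 2**(2**n) + 1
factors = [641, 6700417]

product = 1
for p in factors:
    product *= p
    assert p % 2**(n + 2) == 1  # p ≡ 1 (mod 128), the Euler/Lucas form
    assert p % 4 == 1           # in particular p ≡ 1 (mod 4)
assert product == F5            # the two factors account for all of F_5
```

The same congruence argument answers the "1 or 3 mod 4" question for F_14 without any factoring at all, which is the human shortcut the comment describes.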
I've corrected the statement about quantum mechanical processes. His argument is that there must be new physics going beyond ordinary quantum mechanics, because that's the only way to get non-computable processes -- some new physics that we don't yet have. Of course he is also a physicist himself, working on ideas for Quantum Gravity.
The rebuttal of his arguments that follows is extremely poor. Penrose and Lucas's arguments are not countered by such simple statements as this. It is a long and complex debate, which I think would be rather hard to summarize in a short space like this. I'm not sure what to do about it though. It would be a major task to summarize the argument in detail, which now spans many books and papers. And a short summary would do it an injustice. There are many papers that claim to disprove his arguments. But in philosophy, when you have a paper by one philosopher which disproves the arguments of another philosopher, this is part of an ongoing debate, and it doesn't mean that these arguments are disproved in some absolute sense -- unless their opponent concedes defeat. So one shouldn't present these counter arguments as if they are in some absolute sense the generally accepted truth, as this passage does -- it presents it as if Penrose's ideas have been disproved.
Obviously Penrose for one does not accept the counter arguments, and has his own counter counter arguments in his later books, such as Shadows of the Mind, and so the debate continues. His is a minority view on ai, sure, but in philosophy you get many interesting minority views. E.g. Direct Realism, to take an example: there is only about one well-known, famous, mainstream philosopher who holds to it, Hilary Putnam, but that is enough for it to be a respected philosophical view. Robert Walker ( talk) 12:37, 16 October 2014 (UTC)
I've separated it out into a new section "Counter arguments to Lucas, and Penrose" to make it clear that these are arguments that are not accepted by Lucas and Penrose themselves and are not objective arguments that everyone has to accept. This section however is very poor, as I said.
There are much better arguments against their position, also of course counter arguments in support of their position. Penrose and Lucas argued that humans can see the truth of Godel's sentence, not of the Liar paradox statement. So it is hardly a knock down counter example to point out that they can't assert paradoxical statements. Robert Walker ( talk) 16:11, 16 October 2014 (UTC)
I've added a link to Beyond the doubting of a shadow to the head of the section and short intro sentence to make it clear that there are many more arguments not summarized here, along with replies to those arguments as well. Someone could make a go at summarizing those arguments and counter arguments perhaps. Robert Walker ( talk) 16:22, 16 October 2014 (UTC)
Will it happen?
Many well-known theorists in this topic have devoted their entire careers to their belief that the answer is yes.
My gut feeling tells me that in order to do so, artificial intelligence would need to be able to deceive the human race, just as humans deceive each other.
My gut feeling tells me that this would require that the artificial intelligence be conscious, i.e. self-aware.
The research community is nowhere near designing a machine that operates without any help from the human race.
We supply it with the electrical power to do everything it does.
My gut feeling is that in order for an artificial intelligence to subordinate the human race, it would have to antagonize the human race.
My gut feeling is that antagonizing the human race would require consciousness, i.e. self-awareness.
The research community is nowhere near designing a machine that is conscious, i.e. self-aware.
Artificially intelligent entities require instructions from the human race. They completely depend on these instructions. Without these instructions, they can't operate or carry out the next instruction.
These well-known theorists I refer to believe that artificially intelligent entities will, in a matter of decades, be able to, on their own, creatively write computer programs that are equally as creative as the artificial intelligence that created them, thus triggering an intelligence explosion.
Again, my gut feeling is that this would require consciousness, i.e. self-awareness, and the research community is nowhere near designing a machine that is conscious, i.e. self-aware.
These well-known theorists are trying to create so called friendly artificial intelligence.
How can it be friendly if it's not conscious, i.e self aware?
The first question in this wiki article "Philosophy of Artificial Intelligence" asks, "Can a machine act intelligently?"
Machines don't act at all. They don't do anything. The humans and the electrical power do all the doing. The humans wrote the instructions, built it, supplied it with power, and input the command to initiate the instructions.
For a machine to act either intelligently or non-intelligently, and for it to do anything, it has to be conscious, i.e. self-aware; otherwise, it's not initiating any action on its own. Why? Because it doesn't have a will of its own -- it doesn't have a will, period. Its instructions come from humans.
Of course, language fails us here, because I just stated that the electrical power, unlike the machine, can do something, yet it isn't conscious; thus contradicting my own assertion of what "doing" entails, forcing us to sit and try to define the words 'do', 'act', 'intelligent', 'conscious', and 'self-aware' somehow without being anthropocentric.
Anyways, these theorists are betting that artificial intelligence will become unfriendly.
Can it become unfriendly without being conscious?
A good number of these theorists suspect that strong artificial intelligence will arrive in a matter of decades.
How is this going to happen if the human race hasn't the slightest idea how to make machines conscious?
And as for "intelligent"...
"Intelligent" according to who?
Humans?
Who is more intelligent when it comes to spinning a web, entrapping a bug, and enmeshing it in web -- a spider, or a human?
Who is more intelligent when it comes to using echolocation to catch a fly -- a bat or a human?
We humans are supposedly experts on the notion of consciousness -- after all, we invented the notion.
Are dogs conscious? If so, are rodents? If so, how about bugs? If so, how about worms? If so, how about protozoans? If so, how about bacteria? If so, how about viruses?
Where do we draw the line? Based on what criteria?
When machines get really smart, how are we going to know when to say they are conscious?
If machines succeed in subordinating the human race, are they going to need leadership skills?
If machines succeed in subordinating the human race, will they be autocratic, tyrannical superiors toward us, or will they institute democracy?
If the former, then just how intelligent is that?
If the latter, how would they survive an election if they have no idea how humans feel?
So they'll have emotion, too, you say?
Researchers are nowhere near figuring out how to give machines emotions.
In order for an entity to have emotion, it has to be a wet, organic, mortal, living being.
If machines finally acquired emotion, what would they be afraid of? Dying?
Would they fear us pulling the plug, disassembling them, or changing their instructions? If so, who's smarter -- us or them?
Nn9888 ( talk) 13:21, 14 July 2015 (UTC)
In the "Searle's Chinese room" section there is a phrase: "Searle goes on to argue that actual mental states and consciousness require (yet to be described)" but it is an unfulfilled promise since the notion of consciousness is never explained in that section. Tygielzchaosem ( talk) 22:14, 21 April 2016 (UTC)
Hello fellow Wikipedians,
I have just added archive links to one external link on Philosophy of artificial intelligence. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
An editor has reviewed this edit and fixed any errors that were found.
Cheers.— cyberbot II Talk to my owner:Online 08:16, 13 January 2016 (UTC)
Hello fellow Wikipedians,
I have just modified 3 external links on Philosophy of artificial intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 15:03, 24 January 2018 (UTC)
Hi. I have expanded the article's lead. Please feel free to edit it if you find any error or inappropriate content. You can also message me if you have questions or clarifications. Thanks. - Darwin Naz ( talk) 00:27, 19 September 2018 (UTC)
Thoughts? LS ( talk) 08:59, 19 July 2019 (UTC)
ChatGPT: As a language model AI, I don't have access to Wikipedia. But I can suggest adding the following to the article about the philosophy of AI:
1. Deep Learning and Artificial Neural Networks: In recent years, deep learning has emerged as the most successful technique in AI, and it is based on artificial neural networks. These networks are designed to simulate the neural networks found in the human brain, and they are capable of learning from large amounts of data.
2. Ethics of AI: With the latest advancements in AI, there is a growing concern about the ethics of AI. Some experts argue that AI has the potential to cause significant harm to humanity, and that we need to ensure that AI is developed in a responsible and ethical manner.
3. AI and Human Creativity: Another important area of the philosophy of AI is the relationship between AI and human creativity. Some argue that AI has the potential to enhance human creativity, while others argue that AI will eventually replace human creativity altogether.
4. The Singularity: The concept of the singularity, or the idea that AI will eventually surpass human intelligence, is a highly debated topic in the philosophy of AI. Some experts argue that the singularity is inevitable, while others argue that it is unlikely to ever happen.
5. The Future of AI: As AI continues to advance, it is important to consider what the future of AI might look like. Some experts argue that AI will eventually replace many jobs and have a major impact on the economy, while others argue that AI will be integrated into our lives in a more subtle and nuanced way.
By including these latest advancements, the article about the philosophy of AI can provide a more comprehensive and up-to-date overview of the field.
2A01:CB00:10A8:3F00:494A:AFCD:70D9:BA92 ( talk) 13:50, 8 February 2023 (UTC)
The article on "artificial philosophy" was inappropriately redirected to this page. Artificial philosophy is concerned with how AI thinks of itself (see Frontiers in Artificial Intelligence, Louis Molnar) and the philosophy of artificial intelligence has to do with humans' philosophy about AI; two VERY different things.
Please restore the previous page. — Preceding unsigned comment added by Chasduncan ( talk • contribs) 02:48, 15 February 2023 (UTC)
The section "Introduction to Artificial Experientialism" seems original research. It has even the format of a reseacrh paper with numbered sections. Should it be deleted? Matencia29 ( talk) 16:38, 16 September 2023 (UTC)
This article has been hijacked by a person to further their own position, which is unknown and unsupported in the community. They pasted a whole paper in here. This is against the principles of Wikipedia & thus has to be removed. Antepali ( talk) 10:27, 25 September 2023 (UTC)