This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
This page, formerly a redirect page to symbolic AI, has now been expanded to explain why GOFAI does not mean the same as symbolic AI, but instead refers to a specific approach that has long since been extended. The GOFAI term is still heavily used to caricature symbolic AI in a way that is now quite inappropriate. If the conflation of GOFAI and symbolic AI is not addressed, the confusions will just continue.
The term could be addressed in the article on symbolic AI, but that would distract from the main article, especially given the space needed to explain the conflations and show the examples required to justify this view.
I do not know exactly why the terms are conflated, but I suspect there are two reasons. One is simply a lack of familiarity with symbolic AI among newer students who are entirely and solely immersed in deep learning. The other may be a deliberate conflation, intended to present deep learning as a new paradigm that totally replaces symbolic AI: ignoring symbolic machine learning to imply that symbolic AI was solely fixated on expert systems, and denigrating the use of symbols as "aether", as Hinton has said. Gary Marcus has pointed out that there is considerable animus toward the use of symbols among the current leaders of the deep learning movement.
Whatever the reason, it is time the confusions were addressed explicitly.
Veritas Aeterna ( talk) 22:26, 19 September 2022 (UTC)
While I wholeheartedly agree in principle with breaking off from Symbolic, I vehemently object to doing it like this, with zero reference even to the fact that this formerly referred to Symbolic. All of the current content is ORy and fluffy, and doesn't even refer to the former redirect, which btw embodied the general understanding -- the thing that would be found most supportable. Lycurgus ( talk) 15:14, 24 September 2022 (UTC)
The article said:
Is this a typo? Obviously these are all GOFAI, at least by the common definition of "the dominant paradigm of AI research from 1956 to 1995 or so".
If it isn't a typo, I suppose I should try to offer proof. Here goes. All of these were developed (or at least discussed) by AI researchers in the 60s & 70s working in old-fashioned symbolic AI. Heuristic search is the quintessential GOFAI algorithm; almost every program used it. PLANNER was a GOFAI planning algorithm. Constraint satisfaction is a search algorithm over a space of symbolic expressions (logical and numeric) -- strictly GOFAI. Semantic networks go back to the 50s. Ontologies (that is, "common sense knowledge bases") were proposed in the 70s by Schank, Minsky & others. Non-monotonic logic was part of the work by McCarthy's group at Stanford. Theorem proving (by heuristic search) goes back to Logic Theorist (56); symbolic mathematics goes back at least to Gelernter's geometry theorem prover. ---- CharlesTGillingham ( talk) 06:01, 3 July 2023 (UTC)
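An aside for readers who haven't watched these programs run: "heuristic search" here just means best-first exploration of a space of symbolic states, guided by an estimate of distance to the goal. A minimal sketch in Python follows, with a made-up toy problem of my own -- this is not code from any of the systems named above.
<syntaxhighlight lang="python">
import heapq

def best_first_search(start, goal_test, successors, heuristic):
    """Best-first (heuristic) search over symbolic states.

    successors(state) yields neighboring states; heuristic(state)
    estimates how far the state is from a goal.
    """
    frontier = [(heuristic(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)  # most promising state first
        if goal_test(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None

# Toy problem: sort a tuple of symbols by swapping adjacent pairs.
goal = ('A', 'B', 'C')
result = best_first_search(
    start=('C', 'A', 'B'),
    goal_test=lambda s: s == goal,
    successors=lambda s: (s[:i] + (s[i + 1], s[i]) + s[i + 2:]
                          for i in range(len(s) - 1)),
    heuristic=lambda s: sum(a != b for a, b in zip(s, goal)),
)
print(result)  # ('A', 'B', 'C')
</syntaxhighlight>
The historical programs differ in their state representations and heuristics, but some version of that loop is the common skeleton.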
The standard authority in this area is Russell & Norvig. See p. 982:
27.1.1 The argument from informality
Turing's "argument from informality of behavior" says that human behavior is far too complex to be captured by any formal set of rules; humans must be using some informal guidelines that (the argument claims) could never be captured in a formal set of rules and thus could never be codified in a computer program.
A key proponent of this view was Hubert Dreyfus, who produced a series of influential critiques of artificial intelligence: What Computers Can't Do (1972), the sequel What Computers Still Can't Do (1992), and, with his brother Stuart, Mind Over Machine (1986). Similarly, philosopher Kenneth Sayre (1993) said "Artificial intelligence pursued within the cult of computationalism stands not even a ghost of a chance of producing durable results." The technology they criticized came to be called Good Old-Fashioned AI (GOFAI). GOFAI corresponds to the simplest logical agent design described in Chapter 7, and we saw there that it is indeed difficult to capture every contingency of appropriate behavior in a set of necessary and sufficient logical rules; we called that the qualification problem. But as we saw in Chapter 12, probabilistic reasoning systems are more appropriate for open-ended domains, and as we saw in Chapter 21, deep learning systems do well on a variety of "informal" tasks. Thus, the critique is not addressed against computers per se, but rather against one particular style of programming them with logical rules -- a style that was popular in the 1980s but has been eclipsed by new approaches. One of Dreyfus's strongest arguments is for situated agents rather than disembodied logical inference engines. An agent whose understanding of "dog" comes only from a limited set of logical sentences such as "Dog(x) => Mammal(x)" is at a disadvantage compared to an agent that has watched dogs run, has played fetch with them, and has been licked by one. As philosopher Andy Clark (1998) says, "Biological brains are first and foremost the control systems for biological bodies. Biological bodies move and act in rich real-world surroundings."
According to Clark, we are "good at frisbee, bad at logic."
The embodied cognition approach claims that it makes no sense to consider the brain separately: cognition takes place within a body, which is embedded in an environment. We need to study the system as a whole; the brain's functioning exploits regularities in its environment, including the rest of its body. Under the embodied cognition approach, robotics, vision, and other sensors become central, not peripheral.
Overall, Dreyfus saw areas where AI did not have complete answers and said that AI is therefore impossible; we now see many of these same areas undergoing continued research and development leading to increased capability, not impossibility.
"GOFAI" is NOT an NPOV term for "the dominant paradigm of AI research from 1956 to 1995", instead it is pejorative, and not technically correct. Haugeland's book came out in 1986, and so what he called GOFAI does not fairly describe what came after.
Veritas Aeterna ( talk) 23:37, 3 July 2023 (UTC)
The article said, in the section about "rule based systems":
Haugeland's GOFAI was not strictly rule-based systems, but any system that used high-level symbols to represent knowledge, mental states, or thoughts, or to produce intelligent behavior.
Haugeland's GOFAI is any work that assumes:
1. our ability to deal with things intelligently is due to our capacity to think about them reasonably (including sub-conscious thinking); and
2. our capacity to think about things reasonably amounts to a faculty for internal “automatic” symbol manipulation.— Artificial Intelligence: The Very Idea, pg. 113
This is basically a form of the physical symbol system hypothesis (with some fine-tuning only of interest to philosophers). If you're more familiar with the PSSH than you are with Haugeland, you can take GOFAI to mean a "physical symbol system".
For Haugeland, GOFAI is more of a philosophical position than a branch of AI. It's an assumption that's implicit in symbolic AI projects, especially when they start making predictions, or assuming that symbolic AI is all you need for intelligent behavior. So, if we're to take it as a branch of AI, it has to be any AI research that (a) uses symbols, and (b) thinks that's enough.
So, anyway, Haugeland doesn't belong in this section, because his use of the term is slightly different than the definition used most often today: "the dominant paradigm of AI research from 1956 to around 1995", and is definitely directed towards all symbolic AI, not just rule-based systems.
I should probably add this material to the article. --- CharlesTGillingham ( talk) 07:18, 3 July 2023 (UTC)
First of all, Dreyfus' critique was first published in 1965, before rule-based systems existed.
He directly criticized the work that had been done by 1965, especially the "cognitive simulation" program of research at CMU (i.e., Newell and Simon), and he harshly criticized Simon's public claims to have created a "mind" in 1956 and the nutty predictions he made in the 60s. So all the vitriolic stuff was definitely not just rule-based systems. It was AI in the middle 1960s.
The better part is his four assumptions. His target is clearly "symbols manipulated by formal rules". The "formal rules" here are the instructions for moving the symbols around and making new ones, i.e., a computer program. They are not production rules (production rules weren't invented until ten years later).
There's nothing in his work that suggests he was talking about production-rule-based systems exclusively, or that his critique didn't apply to any work that manipulated symbols according to instructions.
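To make the contrast concrete, here is what a production rule actually is: a condition-action pair matched against a working memory of facts on each recognize-act cycle. A toy sketch (my own invented example, not OPS5 or any historical system):
<syntaxhighlight lang="python">
# A minimal production system: each rule pairs a set of conditions
# with a fact to assert when those conditions hold in working memory.
working_memory = {("temperature", "high"), ("pressure", "rising")}

productions = [
    ({("temperature", "high")}, ("fan", "on")),
    ({("temperature", "high"), ("pressure", "rising")}, ("alarm", "on")),
]

fired = True
while fired:  # recognize-act cycle: match rules, fire, repeat
    fired = False
    for conditions, action in productions:
        if conditions <= working_memory and action not in working_memory:
            working_memory.add(action)  # the rule fires
            fired = True

print(sorted(working_memory))
# [('alarm', 'on'), ('fan', 'on'), ('pressure', 'rising'), ('temperature', 'high')]
</syntaxhighlight>
Nothing in the 1965 critique presumes that machinery; Dreyfus' "formal rules" cover any program that manipulates symbols, including the pre-1965 search programs he was actually writing about.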
On R&N (2021). They mention three things that aren't targets of Dreyfus' critique: (1) "Subsymbolic" stuff. Sure. Dreyfus himself says this. (2) Probabilistic models. It's true that this addresses the qualification problem, but it can't come close to solving the general problem. It's also true that in soft computing, the fuzziness overcomes the limits of symbolic AI. I'm not sure what he would say about this. (3) In the last line, they throw in "learning". Dreyfus didn't say that symbolic AI can't learn, nor that he was only talking about systems that can't learn. Samuel's Checkers was around, and there was a big dust-up with Dreyfus over chess programs (Mac Hack, specifically), so I have to assume he was aware of it. There's no reason to assume he was somehow ignoring it.
I realize that R&N is the most reliable source we have, but in this case, I think Dreyfus is more reliable when we're talking about Dreyfus.
I think R&N missed an important point. Dreyfus' critique only applies to people who think that symbolic AI is sufficient for intelligence, so it doesn't apply to neurosymbolic mashups, or to "narrow" uses of symbolic AI. Dreyfus never said symbolic AI was useless. He said it couldn't do all the intelligent things people do. ---- CharlesTGillingham ( talk) 08:34, 3 July 2023 (UTC)
I agree with you that AI "couldn't do all the intelligent things people do" and still can't by itself. Neither can Deep Learning. There will be a synthesis.
Veritas Aeterna ( talk) 23:53, 3 July 2023 (UTC)
I added a section about Haugeland's original use of "GOFAI" and what he was talking about.
I cut all the material that was based on the mistaken idea that GOFAI referred only to production-rule reasoning systems. There was a lot. ---- CharlesTGillingham ( talk) 12:28, 3 July 2023 (UTC)
Let me know your ideas at this point, e.g., if you'd like to add a section more closely hewing to Haugeland's intentions, versus common use now.
I will add more sources justifying that symbolic AI is not well characterized by GOFAI, too.
Veritas Aeterna ( talk) 20:31, 3 July 2023 (UTC)
I get the feeling you're not reading what I wrote above.
It is literally impossible that the "rules" Dreyfus is referring to are production rules. Production rules had not yet been invented in 1965 when Dreyfus first published his critique. They are "instructions for manipulating symbols" -- that is, a computer program.
It is also literally impossible that Dreyfus' critique applies only to production-rule systems of the 1980s. Production rule systems of the 1980s did not exist in 1965 when Dreyfus first published his critique. It is directly addressed to AI before 1965, because that is when it was written.
R&N does not dispute this, except maybe in the snarky final joke of the quote. I don't think they intended this joke to be taken seriously. ---- CharlesTGillingham ( talk) 21:59, 3 July 2023 (UTC)
You've defined "GOFAI" as "a restricted kind of symbolic AI, namely rule-based or logical agents. This approach was popular in the 1980s, especially as an approach to implementing expert systems."
The Cambridge Handbook of Artificial Intelligence:
The Stanford Encyclopedia of Philosophy, "The logic of action":
These are reliable sources. They define GOFAI as symbolic AI.
They don't restrict it to the 80s. They don't tie it to production rules (the technique behind expert systems).
Haugeland coined the term to describe programs that were "formal symbols manipulated by a set of formal rules". The "rules" here are like the "rules of chess" -- the rules governing the formal system. They are not production rules. Dreyfus (and every other philosopher talking about computationalism) is working from this definition as well.
R&N define the term as "a system that reasons logically from a set of facts and rules describing the domain". It does not mention the 80s and it doesn't explicitly say "production rules". This could equally describe McCarthy's work on logic in the late 60s.
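That wording is easy to make concrete: a system that forward-chains over facts and implication rules like Dog(x) => Mammal(x). A sketch of the kind of thing R&N's definition describes (the predicates and individuals are my own illustration, not from R&N):
<syntaxhighlight lang="python">
# A toy "system that reasons logically from a set of facts and rules".
# Rules are single-variable implications: antecedent(x) => consequent(x).
facts = {("Dog", "fido"), ("Cat", "felix")}
rules = [("Dog", "Mammal"), ("Cat", "Mammal"), ("Mammal", "Animal")]

derived = True
while derived:  # forward-chain until no new facts can be derived
    derived = False
    for antecedent, consequent in rules:
        for predicate, individual in list(facts):
            if predicate == antecedent and (consequent, individual) not in facts:
                facts.add((consequent, individual))
                derived = True

print(("Animal", "fido") in facts)
# True: Dog(fido) => Mammal(fido) => Animal(fido)
</syntaxhighlight>
Nothing about it is specific to the 1980s or to production rules.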
If the definition of the term is in dispute, then, per WP:NPOV, we need to give each of the reliable sources a voice. We need to cover the common definition, philosophy's definition, and (if you insist) R&N's definition. ---- CharlesTGillingham ( talk) 22:31, 3 July 2023 (UTC)
See WP:COATRACK. This topic should be discussed in a (short) subsection of symbolic AI. ---- CharlesTGillingham ( talk) 22:40, 3 July 2023 (UTC)
I've tagged the article. Note there is a "detailed complaint on the talk page" per WP:DETAG.
After you've replied, I will add the Cambridge Handbook definition at the top, mark R&N's definition as specific to them, and re-add the section describing Haugeland's use of the term. I will also tag other sections that are off-topic or unclear and leave them to you. I also leave it to you to deal with the coatrack issue. See below. ---- CharlesTGillingham ( talk) 23:03, 3 July 2023 (UTC)
I have a solution that I think will satisfy both of us. We split our coverage of "symbolic AI" into two parts.
We put the first in symbolic AI and the second in GOFAI. Specifically:
Does this seem good to you? ---- CharlesTGillingham ( talk) 16:25, 4 July 2023 (UTC)