I consider it necessary to answer here a question asked at the beginning of the dispute over the "artificial consciousness" article, because due to the dispute my answer may otherwise remain unclear.
"To say Artificial Consciousness is not Consciousness is simply to define Consciousness as being something human beings cannot build. If "it", whatever "it" is, is built by humans, then by definition it would not be conscious." ( Paul Beardsell)
Yes, I said that artificial consciousness is not consciousness, and I insist on it, considering that we define consciousness as the totality of a person's thoughts and feelings. This is the most general description of consciousness; narrower definitions are supposed to be used in specific contexts, but these may again be understood differently by different people. In that way we define consciousness through human abilities, as a totality of human abilities. And this may remain so, as we measure consciousness through our own abilities. As consciousness is subjective (Searle etc, feelings), we can never determine whether there is an equivalent to it. So if there is something similar to human consciousness in something other than a human, then we should call it by some other name, not just consciousness. But there will always be many subjective abilities or aspects, like certain feelings, if only because there will always be new conditions to which different people react differently, and so they understand the related ideas or experiences differently, which means that these ideas or experiences will be subjective. And we cannot build a machine to fully satisfy subjective concepts, at least because people can never determine whether such a machine finally does what it is supposed to do or not. Therefore a machine is something made by humans based on what they objectively know, not anything that can theoretically be emulated by an algorithm. So yes, by that, if a machine is built by humans, then by definition it cannot be conscious. I don't know of any AC effort to build a machine equivalent to a human, so it is not commonly considered that AC must be equivalent to consciousness. But although I don't agree with that view, I was not against including the opposite point of view in the article. Tkorrovi 20 Mar 2004
So is the human being only a machine or is it something more than a machine? Paul Beardsell 11:32, 22 Mar 2004 (UTC)
By Occam's Razor, the simplest explanation consistent with the facts is likely to be the correct one, and by the Copernican principle, no special or privileged position should unnecessarily be given to any part of the problem. Therefore Artificial Consciousness will be real consciousness. The Church-Turing thesis says we need new physics before two computing machines are different; by Occam's Razor we should not posit new physics without good reason. By the Copernican principle we should claim no special position for human beings without good reason. The only good reasons we have are arrogant ones: humans are too complicated, too special, too something for their brains to be built or copied artificially. Surely, here you are correct: we have lots to learn, we learn more all the time, many things are possible, each POV must be in the article. But where you are wrong, if you hold this view, is in thinking that each POV has equal merit. No, the approach consistent with the scientific method says: artificial consciousness is likely to be real consciousness, by Occam's Razor and the Copernican principle. And that will remain the most likely true POV until contradictory evidence is discovered.
Paul Beardsell 16:43, 22 Mar 2004 (UTC)
If I remember correctly, Occam himself fought against Catholicism with this argument. The problem here is that the idea that artificial consciousness is equivalent to consciousness is also not the simplest solution: we don't know everything about consciousness, so making artificial consciousness in that way would be not only much more complicated, but an unfeasible task. Of these two, the second is the much simpler approach to building AC, and so maybe also the only meaningful approach for AC in general. OK, at least none of the views is proved wrong by this argument either. And it's simpler for us to just write what the different views are. Tkorrovi 22 Mar 2004
Copernicus had to keep his head down when it came to the Church too. The Catholic Church's POV is, of course, that there is a magic spark. Paul Beardsell 17:21, 22 Mar 2004 (UTC)
I don't agree that "the approach consistent with the scientific method says: artificial consciousness is likely to be real consciousness" -- there is not only one approach within the scientific method either. What then about Chalmers, according to whom a simple awareness, like that of a thermostat, can be considered to be artificial consciousness? Why not just leave the different approaches in the article, without judging how "equal" they are. (BTW I don't agree with Chalmers' version or so-called "Weak AC" either.) I think there is no "magic spark", just things we don't know yet, whatever they are. And your example of Copernicus reminded me of another example: what if Galileo had said that the Earth orbiting around the Sun and the Sun orbiting around the Earth are both true, instead of insisting that only the first case is true? He might still have been right, because in accordance with general relativity we can look at things from any point and the equations still describe them correctly. Tkorrovi 22 Mar 2004
There is only one scientific method. Two scientists can take two different approaches to solving the same problem, and each approach can be consistent with the scientific method. It is a methodology, not a recipe.
Occam's Razor does not say there is only one correct way of explaining something, it says do not bother with the more complicated way when the simpler way accords with all the known facts. Of course, we now know that Galileo was wrong: The earth and the sun revolve around their common centre of gravity. That is the Einsteinian view also. Special relativity allows you any location and any linear velocity, but angular velocities (being acceleration) are NOT relative.
Paul Beardsell 18:04, 22 Mar 2004 (UTC)
OK, and it's not completely proved what is the simpler way for AC either. Tkorrovi 22 Mar 2004
Well, if strong AC is shown to be impossible that will mean new physics (Penrose), or the existence of the magic spark (Catholic Church), or at least the Church Turing thesis to be shown wrong (OK, here you have a fighting chance but don't bet on it). The simpler way has to be no new science or religious revelation. Well, that is what Mr Occam says. Paul Beardsell 18:26, 22 Mar 2004 (UTC)
I think nothing is so dramatic; if strong AC is shown to be impossible, then there are just things we don't yet know, not even necessarily very different from what we do know. But why bother with finding out whether strong AC is correct or not -- just include it together with the other views. But this is an interesting philosophical problem, and most of them are not solved; some are even kind of "eternal". Penrose said consciousness is non-computable, so according to him there could be no AC and almost no AI either. Tkorrovi 22 Mar 2004
Penrose proposes what physicists consider a REVOLUTION in physics to support his view. Penrose is a mathematician, not a physicist. The position is every bit as dramatic as I state. I agree, MAYBE strong AC is impossible, but IF SO either (i) there will be new physics, (ii) there will be metaphysical/religious revelation, or (iii) the Church-Turing thesis is wrong. This is NOT a matter of opinion, but of fact. We do not know about strong AC (as it does not yet verifiably exist) but if it is SHOWN TO BE IMPOSSIBLE one of the three alternatives is required. Or (iv) possibly this might be one of those problems to which we will never know the truth, or (v) possibly consciousness does not exist at all, not even in humans, and we are unconsciously deluded. Paul Beardsell 19:00, 22 Mar 2004 (UTC)
Sorry, but doesn't Penrose's argument that consciousness is non-computable already say that strong AC is impossible? I don't agree with that argument, and I also don't see a need for anything non-computable (whether a soul or whatever else). But this is again a matter of views; some scientists agree with Penrose, some don't, but in an article about the matter all views must be included. Tkorrovi 22 Mar 2004
And then, wouldn't it be better to turn from these tremendous philosophical and scientific problems to how to write the article -- just include all the views there are and that's it. I'm by far not against discussing, but we may not get much further this way. Tkorrovi 22 Mar 2004
Yes, but you started this section of this page to address the question "Is AC equivalent to consciousness or not?" I am simply staying on topic. It seems to me that you must have a view: Is the human being a machine or not? According to tkorrovi, what is correct? If you say yes, OK, we are just machines, then I ask what is special about the type of machine that we are that other machines cannot be properly conscious. If you say no, obviously we are more than machines, then you are saying that true consciousness depends on some magic spark or, if you prefer, it is a gift of God. And that would explain your belief that AC can not be true consciousness. If you are undecided then I suggest that your belief might be prematurely held. Paul Beardsell 09:33, 26 Mar 2004 (UTC)
I do not believe the definition of consciousness is yet right. No one has discussed how artificial consciousness is to embody thought. Artificial thought? Has anyone thought about this? Matt Stan 11:41, 22 Mar 2004 (UTC)
Perhaps general thought is the preserve of (artificial) intelligence: Proving a theorem does not require consciousness (I suggest). Whereas reflexive thought, that which humans, dogs and thermostats do all the time, is a preserve of (artificial) consciousness.
Paul Beardsell 14:30, 22 Mar 2004 (UTC)
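The thermostat mentioned above is a useful concrete reference point: its "reflexive thought" is just a sense-compare-act loop. As a hypothetical illustration (the setpoint, hysteresis band, and heating rates below are invented for the example, not taken from this discussion), such a loop can be sketched in a few lines:

```python
# Minimal reflexive "thermostat" loop: sense, compare, act.
# All numbers are illustrative; this is a toy sketch of the kind of
# reflexive behaviour discussed above, not a claim about consciousness.

def thermostat_step(temperature, heater_on, setpoint=20.0, hysteresis=1.0):
    """Bang-bang control with hysteresis: switch the heater on below
    setpoint - hysteresis, off above setpoint + hysteresis, and keep
    the current state inside the dead band."""
    if temperature < setpoint - hysteresis:
        return True
    if temperature > setpoint + hysteresis:
        return False
    return heater_on  # inside the dead band: no change

# A short simulated run: the room cools by 0.5 per step,
# and the heater warms it by 1.5 per step when on.
temp, heater = 22.0, False
for _ in range(10):
    heater = thermostat_step(temp, heater)
    temp += (1.5 if heater else 0.0) - 0.5
```

The point of the sketch is only that the device is not passive: it continually reacts to its environment, which is exactly what makes it the borderline case in this debate.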
It's good that you noticed that not everything requires consciousness. My opinion is therefore that consciousness is a totality of abilities: only all the mental abilities of an average human together give something which we call consciousness (and feel as consciousness). Except in some special cases in a restricted context (a patient is considered conscious when he blinks his eye). Tkorrovi 22 Mar 2004
At the village pump tkorrovi asked what I thought about this version. And I replied I would comment here.
I think that a lot of work has gone into it and in some important ways it is better than the main article. I also think that the main article is better than this one in some important ways.
I can spot some obvious minor flaws (e.g. grammar, wording) here. I also can spot one or two larger mistakes made when tkorrovi made what is an obviously honest and well-meaning attempt to incorporate views that he himself does not hold. Experience tells me that correcting these errors here might be problematic.
I want to incorporate some of the main article's talk page into the article itself. Then the two pages need merging.
Paul Beardsell 14:23, 22 Mar 2004 (UTC)
OK then, thank you, prepare it here before merging. Sure it needs work. I have some hope that we may agree. Not so many people are interested in this article anyway, so if we, the only ones who talk about it, don't agree, this would be highly unreasonable -- we would create weakness where there could be strength. Tkorrovi 22 Mar 2004
I think a merger needs to be done quickly and might not be perfect. If it is not done quickly then for a time we still have multiple versions which then allows for further differences to occur. I think we must allow for temporary reductions of quality and even loss of some content. Sometimes going forward must allow for the occasional backward step. We will soon recover from any mistakes made. What I think you are suggesting is that there must be a consensus to have a new version, that you would still like a veto. Paul Beardsell 16:17, 22 Mar 2004 (UTC)
Yes, it's better to reach consensus in discussion, at least on the most important thing -- how to organise the article. I suggest the same way as in the NPOV version, with the comment at the beginning that views must be kept separate. Because it's clear that there are different views that remain opposed, like the "strong AC" and "weaker" AC schools of thought. Tkorrovi 22 Mar 2004
And instead of merging (or as a way of merging) I suggest adding everything that is considered to be missing into this version, and then just replacing the main article with this version. I think it would be much easier to do it that way. What do you think? Tkorrovi 22 Mar 2004
If you bring everything across to this page and delete the current version, renaming this, the edit history will be lost. That is not necessarily a bad thing, but it is not my preference. A way around this is not to rename but to copy'n'paste back onto Artificial consciousness from here.
Can I also suggest, if you are going to take this huge task on, that you bring over everything to here, word for word, without editing, and only edit when you get here. Then the change log of the merge exercise will all be in one place.
I think once it all is one place that possibly some of the merging work can be shared, if we are careful, but if you would prefer to have a go at it first that's fine by me, as long as we can tell from the log what has happened, and so I can revert you no more than three times. (Joke!) Paul Beardsell 17:33, 22 Mar 2004 (UTC)
No, I thought exactly that we make the changes here, and then copy and paste the entire version into the main article. The edit history would not be lost then. But what is this bringing over that you talk about? I brought over from the main article into the NPOV version everything I considered necessary; there was more, but if I didn't include it, it was just because I (and Matt Stan also) would like the article to be a little shorter. I don't want to bring over more, but feel free to do so if you consider it necessary. I made a few spelling and grammar corrections to what I included from the main article; please compare these paragraphs to those in the main article, and change them (or revert my changes) if it is not the way you like. In particular, I changed the wording of the "Strong AC" argument a bit; do you agree that it is much more clearly said that way? Tkorrovi 22 Mar 2004
But a problem was caused. You brought things across but, in one or two areas, misinterpreted what someone else said. That may have been their fault, not yours, as they might not have been clear in what they wrote. The difficulty is that Wikipedia will not let you compare two different articles to find out what edit you made to the text while it was in transit from one to the other. Please bring it all over as is, possibly at the end of the article, and save that version BEFORE editing. Then cull, cut, and reinterpret, because the author of the corrupted paragraph can then see what has happened. No article should be longer than necessary. But it is sometimes necessary to get longer before getting shorter, as the bishop said to the actress. Paul Beardsell 18:12, 22 Mar 2004 (UTC)
OK, I may do so, but for this discussion it's important to know what exactly you consider that I misinterpreted. I just want to know; it may be important for editing. I didn't want to misinterpret anything, but different people always understand things in slightly different ways, which is why it's better when several people look at the text: one notices what another doesn't. Tkorrovi 22 Mar 2004
Would it be worth summarising what the issues are about this topic? I'll have a go, and perhaps we can reach some consensus:
1) The epistemological question of whether artificial consciousness is possible or whether the term is an oxymoron, i.e. that by definition consciousness cannot be artificial because it wouldn't then be consciousness at all. To get around this, we either have to remove the need for thought from the definition of consciousness or change the title of the piece to simulated consciousness.
2) The question of whether consciousness, or indeed artificial consciousness, necessarily requires a predictive capability, as suggested in the original article. Evidence from alternative sources should be provided to justify the original claim, and I have suggested that the alternative of anticipation should be included to cover this requirement.
3) The question of whether it is possible to define an average human for the purposes of setting criteria against which to measure the capabilities of an artificially conscious machine. No attempt has been made to indicate what this average is, and I have suggested that even a totally paralysed person or a highly mentally retarded person is still deemed to be conscious by humane people. I would add that a newborn baby or an Alzheimer's sufferer are also both conscious, although the latter probably in an impaired way.
4) The question of whether merely the ability to demonstrate consciousness of some phenomenon should be deemed as consciousness (consciousness in the transitive sense) or whether consciousness is absolute and doesn't require its experiencer to be conscious of anything in particular in order to be conscious (consciousness in the intransitive sense). If we accept that any inanimate object that is used to engineer some outcome is itself conscious by virtue of its function, then there isn't really anything to artificial consciousness and it could simply be defined as anything instrumental in achieving some end.
5) The question of reliable academic sources to back up claims made about a technical subject for the purposes of its entry in an encyclopedia, which I haven't seen any evidence of yet.
Matt Stan 18:21, 22 Mar 2004 (UTC)
That's a very good approach to take and it needs some conscious attention, it being 2:30AM here I will be back later. Paul Beardsell 18:37, 22 Mar 2004 (UTC)
[1] The term is bad, but it was coined by others like Igor Aleksander and it is not for us to change it. You may start a "simulated consciousness" page of your own; this may even be a better term, but unfortunately it is not a widely accepted one. But a term is just a term; it must be defined, and this determines the meaning. It does not necessarily have to mean *artificial* *consciousness*; it may also mean simulating consciousness by artificial means, and this is not an oxymoron. Whether to remove thought is another question; we may also simulate everything about thought that we can simulate. But on some views there may be a need to exclude it; all these views must be included in the article.
[2] The article by Igor Aleksander where predictive capability is considered one requirement for AC is included in the NPOV version. For NPOV, all requirements that may be considered necessary should be listed, including anticipation, awareness etc, but in addition to predictive capability -- one requirement should not be deleted because another requirement is included.
[3] What "average person" means is more or less self-evident to most people. The cases you mention are considered conscious in another context (medical, whether a person can move his body or not). People often don't say that a mentally retarded person has the consciousness of an average human. A newborn baby is another question; this is again a matter of views, but it is likely more than any artificial consciousness, in the sense that by learning it can mostly achieve all the abilities and aspects of consciousness of an average human.
[4] These are again definitions of the term "consciousness" for use in specific contexts. One view is to proceed from the most general definition, and this demands that almost all the mental abilities of an average person be present for something to qualify as having consciousness.
[5] Of course sources must be included, but as the term is used, and is also in some sense important, it qualifies for an entry in an encyclopedia more than some other subjects would.
And maybe it's better to discuss a bit more slowly; otherwise the discussion will not have enough quality. There is nothing wrong in asking 5 questions at once, but is it always best?
Tkorrovi 22 Mar 2004
A link, "lectures by Igor Aleksander", to show that the term "artificial consciousness" has been used in a scientific context: http://www.i-c-r.org.uk/lectures/spr2000/aleksander13may2000.htm Tkorrovi 22 Mar 2004
Also see http://www.ph.tn.tudelft.nl/People/bob/papers/conscious_99.html
Thank you indeed Matthew for the paper.
About oxymoron. I have talked to several people, including some PhDs, about artificial consciousness, and not all consider artificial consciousness nonsense. And then again, some scientists consider, for example, consciousness studies (and everything related) nonsense as well; this is a matter of views again. So if you don't want anybody to consider what you do nonsense, then don't work on anything related to consciousness. At the same time, artificial consciousness is likely to be an important link between consciousness and AI. What most of the people I talked to say, though, is that the term "artificial consciousness" is somewhat misleading because of the words used. Without knowing any definition or anything about it, the first association would be human consciousness built artificially, or even consciousness to replace natural consciousness (a very bad meaning). Many people don't like the idea that consciousness can be made by artificial means and think that it must be some cranky effort to build an artificial human. Without a definition one cannot realize that what is meant is a mere simulation of conscious abilities, as close to the natural abilities as we can get based on our knowledge of the subject. Some efforts also involve artificially simulating certain feelings (emotions). Some are systems intended to be unrestricted enough to enable the development necessary to achieve certain abilities of consciousness like prediction, or imagination, by enabling the creation of different alternatives in certain circumstances. But these systems are not so immensely complicated (though often not easy either), at least very, very far from any artificial thinking at the level of a human. So yes, the term is bad and misleading, and "simulated consciousness" or similar may be much better. But the term "artificial consciousness" began to be used in a scientific context, and my opinion is that it's not for us to change it.
But if you think so, feel free to create a "simulated consciousness" article; the "artificial consciousness" article must remain, because this term is in use. Maybe it must be written that some people think it's nonsense, but then the same applies to AI, because some people think that is nonsense as well. Maybe the AI article was once edited by people who thought it a failed field (I have that impression when I read the older entries), but the people who remained to edit it later did not think so. Compared to AI, AC is of course by far less significant. These were my somewhat random thoughts about the subject. Tkorrovi 22 Mar 2004
Matthew, notice that thoughts, and even feelings, were included in the definition of consciousness in the paper you presented. What I don't like, though, is the use of the word "soul". Even if it has a strictly defined and objective meaning, I think it's not right to use such a word in a scientific context, as it comes from religion or belief. There is such a variety of different ideas and interpretations concerning artificial consciousness, artefactual consciousness, simulated consciousness etc that the only possibility is to write the different views separately; there is no general consensus about this in science yet, but the research is still being done. Tkorrovi 22 Mar 2004
I am puzzled about the idea of consciousness being associated with prediction. I thought that perhaps it meant anticipation in the short term, i.e. immediate cogent reaction to imagined possible events (including internal events such as might emanate from thought processes). Can anyone explain, in relation to consciousness, what is being predicted and by whom, and why this is thought to be an essential component of consciousness? Matt Stan 08:49, 25 Mar 2004 (UTC)
According to my Concise Oxford Dictionary, "anticipate" in the wider sense means "foresee", "regard as probable" etc, so it means the same as "predict" ("foretell"). The difference with the word "anticipate" is that it also has a narrower meaning, "deal with before the proper time". If you talk about immediate reaction to imagined events, then you most likely have that meaning in mind. No, "predict" is not used in that sense in AC. In the paper I added to the NPOV version, Igor Aleksander talks about the "Ability to predict changes that result from action depictively". It is also said in the paper by Rod Goodman and Owen Holland www.rodgoodman.ws/pdf/DARPA.2.pdf that "Good control requires the ability both to predict events, and to exploit those predictions". The reason we need to predict the changes that result from action is so that we can then compare them with the events that really happened, which enables us to control the environment and ourselves (i.e. act so that we can predict the results of our action). This is also important for training AC -- the system tries to predict an outside event, and if this event indeed happens, that gives it a positive signal. What is necessary for this is imagination, i.e. generating all the relevant possibilities for a certain case, for which the system must be very unrestricted. Also necessary is some sort of "natural selection", so that only those models (processes) survive that fit their environment. So the events are imagined not in order to react to them immediately; they are stored to be exploited later, at the time when the predicted outside event should occur. Tkorrovi 18:50, 25 Mar 2004 (UTC)
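The training scheme Tkorrovi describes (the system predicts an outside event, receives a positive signal when the prediction comes true, and only well-fitting models survive) can be sketched as a toy program. To be clear, this is a hypothetical illustration invented for this discussion, not code from Aleksander's or Goodman and Holland's work; the environment, the scoring rule, and the mutation scheme are all made up for the example:

```python
import random

# Toy sketch of prediction-driven selection: each "model" is just a
# guessed probability that an outside event occurs. Models are scored
# by how often their predictions match the environment (the positive
# signal), and only the better-fitting half survive each generation.

random.seed(0)

def environment_event(p_true=0.7):
    """The outside world: the event actually occurs 70% of the time."""
    return random.random() < p_true

def score(model_p, trials=200):
    """Reward a model each time its (thresholded) prediction matches
    what actually happened -- the 'positive signal'."""
    hits = 0
    for _ in range(trials):
        predicted = model_p > 0.5
        if predicted == environment_event():
            hits += 1
    return hits / trials

models = [random.random() for _ in range(10)]   # initial random guesses
for generation in range(5):
    ranked = sorted(models, key=score, reverse=True)
    survivors = ranked[:5]                       # "natural selection"
    # Survivors "reproduce" with small mutations, clamped to [0, 1].
    models = survivors + [min(1.0, max(0.0, m + random.uniform(-0.1, 0.1)))
                          for m in survivors]

best = max(models, key=score)
```

Over a few generations the surviving models tend toward predicting the event that the environment actually produces, which is the sense in which successful prediction acts as the training signal here.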
This hinges on the word "necessary". Anticipation is a very useful, desirable attribute for a conscious being to have. But that does not mean it is a necessary attribute of consciousness. I agree with tkorrovi about all the advantages of anticipation, just not that it is necessary. That it is necessary has not been shown. Desirable, yes. Useful, yes. Necessary, no. Therefore what is only supposedly necessary does not merit a prominent, headline, first-definition position in the article. Paul Beardsell 09:13, 26 Mar 2004 (UTC)
If a human is passive then it can still be conscious. So, "passive animate objects" can be conscious. If all "inanimate objects" cannot be conscious then the big question is answered and, in my view, we can go home. So, passiveness disqualifies an inanimate object from consciousness but not an animate one. Which is just too blatant an adoption of a privileged position to be allowed.
But only one of many. Luckily, for my argument, the thermostat is not passive.
Paul Beardsell 09:06, 26 Mar 2004 (UTC)
No. Nonsense. Any word now seems to mean consciousness. Or not. When the word is used in relation to a human then it means consciousness. When the same word is used in relation to a machine then it means not-consciousness. Or the word is inadmissible because, why? It's a machine! Matthew, it is not an unreasonable prejudice to have, but it is unreasonable not to recognise it as a prejudice: you as good as define consciousness as something only a human (or possibly some higher animals) can have. I put you to the same test I put Tkorrovi: magic spark or new physics? Paul Beardsell 17:15, 26 Mar 2004 (UTC)
My method can be summarised as follows:
Is that such a controversial approach? We are still, unfortunately, bogged down at Step 1, or at least some of us are!
Final point here: the use of artificial (or simulated, which effectively comes down to the same thing, from earlier discussion) merely denotes the idea of consciousness being made by means other than the natural means by which consciousness usually arises. Any attempt at artificial consciousness deployment that fails the tests will by definition preclude the artifact under test from being deemed conscious (or artificially conscious, if people wish to make that distinction), i.e. from possessing (artificial) consciousness.
How's that? Can we move forward now? Have you still got your Lego set [1], and the time and money to become the pioneer? I will award a prize of one anonymous Wikipedia log-in to the person who comes up with the first implementation. (See User talk:Paul Beardsell for decryption of the last sentence.) Matt Stan 19:01, 26 Mar 2004 (UTC)
Whether or not I am in a double-bind in another discussion has no impact here. Or should not! You are seemingly irritated with me pointing out contradictory use of language and asking you basic relevant questions which you do not address. I suggest this is because you are reluctant to challenge your own fundamental beliefs. :-)
When you assert that simulated and artificial are the same you must recognise that this does firmly peg you into the "artificial consciousness can never be real consciousness" school. You insist on a recipe for consciousness which implies a set of values which makes consciousness human-like: So you are firmly pegged in that school too. These are perfectly reasonable if anthropomorphic views to hold. But you seem to deny the admissibility of other views.
What if (real) consciousness could be built, but not one that was sufficiently human-like to pass your tests? That would be a real tragedy: Refusing to recognise a possibly rich otherness.
After that can I buy you a beer sometime between 9 April and 1 May? Of course!
Paul Beardsell 04:40, 27 Mar 2004 (UTC)
Microbotics, Domotics, Domobot, Digital pet, Tamagotchi, Neopets
And so, on to my next question. What will it be for, assuming we are talking about engineering an artifact, the exact requirements for which have not all been defined? It could end up an interesting curiosity, one that might even warrant putting a thermostat alongside it in the telling of the story of how it came about. Or what else could it be? I'm suggesting that if we met the initial criteria for passing the first test, then we would have achieved something which could be improved, and the notion of giving it heuristics of its own opens endless possibilities by which it might far exceed the constraints of mere human consciousness. As for being accused of anthropomorphism, I am not arguing that the model should necessarily be a baby, just that it should be considered, and alternatives proposed. And for us to be able to say that AC exists, rather than just being an idea on a discussion page, we need some artifact to mention. At the moment the only other candidate we have is a thermostat. I don't see why we should be averse to the idea of using ourselves as our model for something that we alone understand. If we are talking about a consciousness that is other, then I suggest we switch to entanglement theory and the idea that the future can alter the past, and see whether the patterns that are observable in the universe manifest what one might construe as a godly consciousness. But that would be neither simulated nor artificial. Please remind me of the qualities of this otherness. What should I read (or re-read) in order to gain an appreciation of it? Matt Stan 08:43, 27 Mar 2004 (UTC)
Matt Stan 11:09, 27 Mar 2004 (UTC)
While I attempt to craft a more thoughtful response, this struck me after my last contribution:
One view is that AC will not be real because we are too dumb to build real C. This is a defeatist view which tempts us to give up before we start, but maybe it is a realistic view. AC which is really C might be so otherly (is that a word, I asked), othernessly (also no good). Otherworldly! There is my example. Should we be visited by aliens, how will we test that they are conscious? Easy! After testing their intelligence using the Turing test we will test their consciousness with the Stannard test. We will test these bilaterally symmetrical, bipedal, two-eared, swivelly-eyed aliens with our anthropomorphic (both meanings) tests!
If we assume aliens exist (at least for this argument) we have no good reason to expect aliens to be bipedal or even to breathe air. Yet we would expect (some) aliens to be conscious, I suggest. But that consciousness is less likely to be human-like, I suggest, than their locomotion is to be bipedal.
By this thought experiment I hope to have established that consciousness of a non-human type is possible or, depending on your cosmic view, likely.
Paul Beardsell 11:20, 27 Mar 2004 (UTC)
As to the quantum entanglement point: Certainly it is not me who seems to want to invoke new science or ignore old science: I have been pointing out that the existing science indicates there is no obstacle to AC being real. Paul Beardsell 11:20, 27 Mar 2004 (UTC)
I'm still not clear on the issue you take with my notions about artificial vs simulated. I had intended that they should refer to the same thing, but was pointing out that artificial consciousness is oxymoronic because once AC is achieved it ceases to be artificial and becomes real, whereas simulated consciousness can be as real-like as we make it and no semantic problems arise. Are you suggesting that simulated consciousness is actually something different, which I haven't taken into account?
I was also indicating that the test should be that humans should be the judge. I was not stipulating what the business requirements are, but putting forward a set that might meet the test requirement. You might come up with a philosophic argument that identifies an artifact as conscious, as per that philosophic argument, but that would not count in the popular view as consciousness. When the aliens come, will they be expected to give us logical proofs of their consciousness (to help us decide whether their consciousness, such as it is, is real or artificial?), or will it just remain a human perception as to whether they are or not? The less like us they are, the more difficult it might be to judge, but I am maintaining that ultimately we can only judge by what we consider to be consciousness, based on our own experience. Therefore consciousness is necessarily anthropomorphically defined. And it has to interact with humans at some level in order to be tested. See Argument from ignorance and Anthropic principle. Matt Stan 11:56, 27 Mar 2004 (UTC)
You raise two points I can readily address: The definition of artificial and the anthropic principle.
Back when there was no assisted locomotion other than that provided by animals, had the concept of artificial locomotion been discussed, some might have held that it was impossible; that any locomotion so achieved would be simulated, not real; and therefore that simulated and artificial are synonyms. But they would have been wrong. I suggest that we stick to the dictionary definition: artificial - made by man or otherwise assembled, not arising naturally.
Formally: If A and B are both properties of X then it does not follow that A is B or that A is a subset of B or vice versa. A and B can be disjoint, distinct. Let X be "consciousness", let A be "simulated", and let B be "artificial".
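The set-theoretic point can be illustrated with a toy example (the inventory and its tags below are invented for illustration, not drawn from the discussion): two properties of the same class of objects need not imply one another in either direction, yet can still co-occur.

```python
# Toy illustration: "artificial" (man-made) and "simulated" (an imitation)
# as two independent properties. All items and tags are invented examples.
devices = {
    "horse":            {"artificial": False, "simulated": False},
    "motorcycle":       {"artificial": True,  "simulated": False},
    "stick insect":     {"artificial": False, "simulated": True},   # mimicry: imitation arising naturally
    "flight simulator": {"artificial": True,  "simulated": True},
}

artificial = {name for name, props in devices.items() if props["artificial"]}
simulated = {name for name, props in devices.items() if props["simulated"]}

# Neither property is a subset of the other, so neither implies the other.
print(simulated <= artificial)  # False: the stick insect simulates but is not man-made
print(artificial <= simulated)  # False: the motorcycle is man-made but simulates nothing
print(simulated & artificial)   # {'flight simulator'}: the properties can still co-occur
```

With X as "consciousness", A as "simulated", and B as "artificial", the same structure holds: showing that a thing is one says nothing, by itself, about whether it is the other.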
And had the test for detecting something to be locomotion been that its propulsion must be similar to the locomotion known, by legs, then the motorcycle would have failed the test. That consciousness must be tested against and by humans is your assertion. That it can be so tested, I agree. But a more objective test would be useful. You seem to use the fact that we are conscious as a handicap to our recognising consciousness elsewhere. I am not a snail yet I can recognise a snail. If I were a snail, recognising a snail surely would be easier, not more difficult? Let us decide on a simple, non-anthropomorphic (I believe I have shown this is necessary when dealing with aliens) definition of consciousness, and proceed from there.
The anthropic principle is where you go when forced. It is not supposed to be your first refuge.
Paul Beardsell 12:27, 27 Mar 2004 (UTC)
Damn! I wish I had used flight not locomotion. Then I would characterise your argument as saying that for something to be flying it must have flapping wings. No, I would say, let's look at the definition of flight. Here, too, I say, let's look at the definition of consciousness. Is it self-aware? Then it is conscious. Is it man-made? Then it is artificial. Paul Beardsell 12:43, 27 Mar 2004 (UTC)
I'm happy to stick to the dictionary definition: artificial - made by man or otherwise assembled, not arising naturally, though simulate is defined as: Imitate the conditions of (a situation or process); spec. produce a computer model of (a process). (SOED mid-20th Century usage). If the AC machine were to contribute to its own consciousness by virtue of its heuristics, would that instance of its consciousness be artificial, or could it be said to have arisen naturally as a result of it having received a wake-up call from an external source? I think simulated indicates a more robust approach, and of course is less anthropomorphic than artificial. Matt Stan 13:57, 27 Mar 2004 (UTC)
Have we been round this loop yet? Self-awareness implies to me the notion of the self that is aware and that which it is aware of, and allows this for stimuli arising in its internal environment, but it leaves out the idea of the external environment, i.e. awareness that is not self-awareness. That it is aware of either is determined by its paying attention to one or other (or both). Therefore, this self-awareness is subsumed within attentiveness - it is just one part of it. The notion of the self that is aware/attentive is an important prerequisite, though. Perhaps self-awareness is the wrong term for the fundamental characteristic of consciousness and should be replaced with awareness of environment, where environment includes input from external sources via senses and input from internal resources such as memory.
Also see [Consciousness-only].
About awareness http://tkorrovi.proboards16.com/index.cgi?board=general&action=display&num=1080491783 Tkorrovi 16:20, 28 Mar 2004 (UTC)
You are still using the term average human, but I maintain there is in any event no such thing, and that a human who manifests the minimum requirements of consciousness is nevertheless conscious. Therefore we should aim in the first instance for an AC implementation that emulates the minimum requirements rather than any notional average. The problem is hard enough without making it more difficult unnecessarily. Matt Stan 21:35, 28 Mar 2004 (UTC)
I was interested to read in the articles starting with Artificial intelligence (which, incidentally, cover much of the ground we have been attempting to cover here) that one of the commentators had indicated that a body is an essential prerequisite for digital sentience. I need to go back to those articles to resolve what in effect are the intersections/distinctions between AC and other forms of artificial humanity (or whatever we want to call it), but I pose here the question as to whether the artifact that we are postulating for the purposes of proving the existence of AC must necessarily have some robotic element, i.e. that it cannot be entirely absorbed in self-awareness, or put rather more crudely, onanistic. For example, even if we decided not to build a mechanical robot to demonstrate AC, the representation of an image on a screen, coupled with a camera pointing at whoever was watching that screen, could help to give the AC machine the necessary response mechanisms to be verifiable. Without such, or similar, could it ever be convincing? I suppose I am specifying something very basic, i.e. that the thing needs outputs, in order that we can observe it; and inputs, in order that we can test it. These may not be prerequisites of AC per se, but for the purposes of testability I am suggesting that they are prerequisites of any verifiable implementation. Matt Stan 07:50, 29 Mar 2004 (UTC)
To avoid causing offense I think you should say that Hawking understands things better than "you or I". Paul Beardsell 13:47, 29 Mar 2004 (UTC)
I agree about reading the other Wikipedia articles: Consciousness needs tidying up but there is some good stuff there. Paul Beardsell 13:51, 29 Mar 2004 (UTC)
I reckon the body could be simulated but the consciousness be entirely real, even if artificial. The conscious entity would, in this example, be deluded about the existence of its body. Paul Beardsell 13:56, 29 Mar 2004 (UTC)
In this section I suggest we list those attributes of consciousness which are necessary. I.e. If any one of the listed attributes is missing from an entity then the entity is not conscious. Having all these attributes does not necessarily make the entity conscious either!
The conscious entity should know something about its own state. The thermostat knows if it is too cold or too hot.
It should know its own physical limits, its (real or simulated) body. Insects qualify here. Trivially: A tamper-resistant device could be said to have this.
It should understand something about its identity: that it is distinct from other possibly similar objects. Many vertebrates seem to get this right. Trivially: Some devices (e.g. RFID) are acutely aware of their own serial number.
Paul Beardsell 05:10, 30 Mar 2004 (UTC)
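The checklist above can be caricatured in a few lines of code; a hypothetical sketch (all names and thresholds invented here) of an entity that exhibits all three proposed attributes yet is plainly not conscious, which is exactly the point that the attributes are necessary rather than sufficient.

```python
class MinimalEntity:
    """Toy entity exhibiting the three proposed necessary attributes.

    Having all three plainly does not make it conscious: the attributes
    are offered as necessary, not sufficient.
    """

    def __init__(self, serial, width_cm, depth_cm):
        self.serial = serial                # identity, like an RFID serial number
        self.bounds = (width_cm, depth_cm)  # its (real or simulated) physical limits
        self.temperature_c = 20.0           # internal state it can report on

    def own_state(self):
        # Attribute 1: knows something about its own state (the thermostat test).
        if self.temperature_c > 25.0:
            return "too hot"
        if self.temperature_c < 15.0:
            return "too cold"
        return "comfortable"

    def within_own_limits(self, x_cm, y_cm):
        # Attribute 2: knows its own physical limits (the tamper-resistance test).
        return 0 <= x_cm <= self.bounds[0] and 0 <= y_cm <= self.bounds[1]

    def is_distinct_from(self, other):
        # Attribute 3: knows it is distinct from other, possibly similar, objects.
        return self.serial != other.serial


a = MinimalEntity("SN-001", 10, 10)
b = MinimalEntity("SN-002", 10, 10)
print(a.own_state())              # comfortable
print(a.within_own_limits(5, 5))  # True
print(a.is_distinct_from(b))      # True
```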
My problem here is with know. How do you know that a thermostat knows anything, as distinct from how I know you know anything? In your case, I can ask you, 'How do you know you are too hot?' as opposed to just 'Are you too hot?'. Not so with a thermostat. There is a distinction between knowing and just being - it's an epistemological question that needs addressing in terms of AC entities. Matt Stan 08:08, 30 Mar 2004 (UTC)
When I say I know something you recognise that I am at least superficially similar to you, and that I might, therefore, mean something similar by that term as you do. You have said that you are forced into the anthropic principle in these circumstances when talking about consciousness because of problems like this. Interestingly, when I say I am too hot you know what I mean but you might disagree and think it too cold! When I say it is hot I am not really making a comment only about the temperature: amongst other things, I am referring to how quickly I am gaining or losing heat. This is a complicated function of several factors: my current metabolic rate (itself a function of how recently I ate, recent exercise, etc.), the wind speed, the humidity, how I am dressed, whether the heat I receive is from radiation or conduction, etc. When a thermostat "says" it "knows" it is too hot it makes a reliable comment about the temperature. I do not. Yet you allow me "knowledge" about the temperature but you deny it of the thermostat. Essentially, once again, you reserve the word "know" for humans. Fine, say I: what word will you allow for thermostats, and I will use that for humans too. Paul Beardsell 08:52, 30 Mar 2004 (UTC)
You know I am too hot because I turned the airconditioning on. You know the thermostat is too hot because it has turned the airconditioning on. Paul Beardsell 09:09, 30 Mar 2004 (UTC)
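The behavioural criterion being appealed to here can be made concrete; a minimal sketch, with invented names, in which the only evidence that the thermostat "knows" it is too hot is the same observable act we accept as evidence from a human: switching the air conditioning on.

```python
class Thermostat:
    """Toy thermostat: its 'knowledge' that it is too hot is observable
    only as behaviour, switching the air conditioning on or off."""

    def __init__(self, setpoint_c):
        self.setpoint_c = setpoint_c
        self.aircon_on = False

    def sense(self, temperature_c):
        # The behavioural evidence: the same act by which we judge a human.
        self.aircon_on = temperature_c > self.setpoint_c
        return self.aircon_on


stat = Thermostat(setpoint_c=22.0)
print(stat.sense(27.0))  # True: it has "noticed" it is too hot
print(stat.sense(18.0))  # False: no longer too hot
```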
This page was copied to Talk:artificial consciousness. As NPOV version is merged, we should continue the discussion there. Tkorrovi 12:34, 26 Mar 2004 (UTC)
- The discussion has continued here. Matt Stan 08:10, 27 Mar 2004 (UTC)
So is the human being only a machine or is it something more than a machine? Paul Beardsell 11:32, 22 Mar 2004 (UTC)
By Occam's Razor, the simplest explanation consistent with the facts is likely to be the correct one; and by the Copernican principle, no special or privileged position should unnecessarily be given to any part of the problem. So Artificial Consciousness will be real consciousness. The Church-Turing thesis says we need new physics before two computing machines are different; by Occam's Razor we should not posit new physics without good reason. By the Copernican principle we should claim no special position for human beings without good reason. The only good reasons we have are arrogant ones: humans are too complicated, too special, too something for their brains to be built or copied artificially. Surely, here you are correct: we have lots to learn, we learn more all the time, many things are possible, each POV must be in the article. But where you are wrong, if you hold this view, is in thinking that each POV has equal merit. No: the approach consistent with the scientific method says artificial consciousness is likely to be real consciousness, by Occam's Razor and the Copernican principle. And that will remain the most likely true POV until contradictory evidence is discovered.
Paul Beardsell 16:43, 22 Mar 2004 (UTC)
If I remember correctly, Occam himself fought against Catholicism with this kind of argument. The problem here is that the concept that artificial consciousness is equivalent to consciousness is also not the simplest solution: we don't know everything about consciousness, so making artificial consciousness in that way would be not only much more complicated, but an unfeasible task. So of these two, the second is the much simpler approach to building AC, and so maybe also the only meaningful approach for AC in general. OK, at least none of the views is proved wrong by this argument either. And it's simpler for us just to write what different views there can be. Tkorrovi 22 Mar 2004
Copernicus had to keep his head down when it came to the Church too. The Catholic Church's POV is, of course, that there is a magic spark. Paul Beardsell 17:21, 22 Mar 2004 (UTC)
I don't agree that "the approach consistent with the scientific method says: Artificial consciousness is likely to be real consciousness"; there is not only one approach in the scientific method either. What then about Chalmers, according to whom a simple awareness, like that of a thermostat, can be considered to be artificial consciousness? Why not just leave the different approaches in the article, without judging how "equal" they are. (BTW I don't agree with Chalmers' version, or so-called "Weak AC", either.) I think there is no "magic spark", just things we don't know yet, whatever they are. And your example of Copernicus reminded me of another example: what if Galileo had said that the Earth orbiting the Sun and the Sun orbiting the Earth are both true, instead of insisting that only the first is true? He might still have been right, because in accordance with general relativity we can look at things from any point and the equations still describe them correctly. Tkorrovi 22 Mar 2004
There is only one scientific method. Two scientists can take two different approaches to solving the same problem, and each approach can be consistent with the scientific method. It is a methodology, not a recipe.
Occam's Razor does not say there is only one correct way of explaining something, it says do not bother with the more complicated way when the simpler way accords with all the known facts. Of course, we now know that Galileo was wrong: The earth and the sun revolve around their common centre of gravity. That is the Einsteinian view also. Special relativity allows you any location and any linear velocity, but angular velocities (being acceleration) are NOT relative.
Paul Beardsell 18:04, 22 Mar 2004 (UTC)
OK, and it's not completely proved what is the simpler way for AC either. Tkorrovi 22 Mar 2004
Well, if strong AC is shown to be impossible that will mean new physics (Penrose), or the existence of the magic spark (Catholic Church), or at least the Church-Turing thesis being shown wrong (OK, here you have a fighting chance, but don't bet on it). The simpler way has to be no new science or religious revelation. Well, that is what Mr Occam says. Paul Beardsell 18:26, 22 Mar 2004 (UTC)
I think nothing is so dramatic; if strong AC is shown to be impossible then there are just things we don't yet know, not even necessarily very different from what we know. But why bother with finding out whether strong AC is correct or not; just include it together with the other views. Still, this is an interesting philosophical problem, and most such problems are not solved; some are even kind of "eternal". Penrose said consciousness is non-computable, so according to him there could be no AC and almost no AI either. Tkorrovi 22 Mar 2004
Penrose proposes what physicists consider a REVOLUTION in physics to support his view. Penrose is a mathematician, not a physicist. The position is every bit as dramatic as I state. I agree, MAYBE strong AC is impossible, but IF SO either (i) there will be new physics, (ii) there will be metaphysical/religious revelation, or (iii) the Church-Turing thesis is wrong. This is NOT a matter of opinion, but of fact. We do not know about strong AC (as it does not yet verifiably exist) but if it is SHOWN TO BE IMPOSSIBLE one of the three alternatives is required. Or (iv) possibly this might be one of those problems to which we will never know the truth, or (v) possibly consciousness does not exist at all, not even in humans, and we are unconsciously deluded. Paul Beardsell 19:00, 22 Mar 2004 (UTC)
Sorry, but doesn't Penrose's argument that consciousness is non-computable already say that strong AC is impossible? I don't agree with that argument, and I also don't see a need for anything non-computable (whether soul or whatever else). But this is again a matter of views; some scientists agree with Penrose, some don't, but in an article about the matter all views must be included. Tkorrovi 22 Mar 2004
And then, wouldn't it be better to turn from these tremendous philosophical and scientific problems to how to write the article: just include all the views there are, and that's it. I'm by far not against discussing, but we may not get much further this way. Tkorrovi 22 Mar 2004
Yes, but you started this section of this page to address the question "Is AC equivalent to consciousness or not?" I am simply staying on topic. It seems to me that you must have a view: Is the human being a machine or not? According to tkorrovi, what is correct? If you say yes, OK, we are just machines, then I ask what is special about the type of machine that we are that other machines cannot be properly conscious. If you say no, obviously we are more than machines, then you are saying that true consciousness depends on some magic spark or, if you prefer, it is a gift of God. And that would explain your belief that AC can not be true consciousness. If you are undecided then I suggest that your belief might be prematurely held. Paul Beardsell 09:33, 26 Mar 2004 (UTC)
I do not believe the definition of consciousness is yet right. No one has discussed how artificial consciousness is to embody thought. Artificial thought? Has anyone thought about this? Matt Stan 11:41, 22 Mar 2004 (UTC)
Perhaps general thought is the preserve of (artificial) intelligence: Proving a theorem does not require consciousness (I suggest). Whereas reflexive thought, that which humans, dogs and thermostats do all the time, is a preserve of (artificial) consciousness.
Paul Beardsell 14:30, 22 Mar 2004 (UTC)
It's good that you noticed that not everything requires consciousness. Therefore my opinion is that consciousness is a totality of abilities; only all the mental abilities of an average human together give something that we call consciousness (and that feels like consciousness). Except in some special cases in restricted context (a patient is considered conscious when he blinks his eye). Tkorrovi 22 Mar 2004
At the village pump tkorrovi asked what I thought about this version. And I replied I would comment here.
I think that a lot of work has gone into it and in some important ways it is better than the main article. I also think that the main article is better than this one in some important ways.
I can spot some obvious minor flaws (e.g. grammar, wording) here. I also can spot one or two larger mistakes made when tkorrovi made what is an obviously honest and well-meaning attempt to incorporate views that he himself does not hold. Experience tells me that correcting these errors here might be problematic.
I want to incorporate some of the main article's talk page into the article itself. Then the two pages need merging.
Paul Beardsell 14:23, 22 Mar 2004 (UTC)
OK then, thank you, prepare it here before merging. Sure it needs work. I have some hope that we may agree. Not so many people are interested in this article anyway, so if we, the only ones who talk about it, don't agree, this would be highly unreasonable -- we would create weakness where there could be strength. Tkorrovi 22 Mar 2004
I think a merger needs to be done quickly and might not be perfect. If it is not done quickly then for a time we still have multiple versions which then allows for further differences to occur. I think we must allow for temporary reductions of quality and even loss of some content. Sometimes going forward must allow for the occasional backward step. We will soon recover from any mistakes made. What I think you are suggesting is that there must be a consensus to have a new version, that you would still like a veto. Paul Beardsell 16:17, 22 Mar 2004 (UTC)
Yes, it's better to reach consensus in discussion, at least on the most important thing -- how to organise the article. I suggest doing it the same way as in the NPOV version, including the comment at the beginning that views must be separated. Because it's clear that there are different views that remain opposed, like the "strong AC" and "weaker" AC schools of thought. Tkorrovi 22 Mar 2004
And instead of merging (or as a way of merging), I suggest adding everything that is considered to be missing into this version, and then just replacing the main article with this version. I think it would be much easier to do it that way. What do you think? Tkorrovi 22 Mar 2004
If you bring everything across to this page and delete the current version, renaming this, the edit history will be lost. That is not necessarily a bad thing, but it is not my preference. A way around this is not to rename but to copy'n'paste back onto Artificial consciousness from here.
Can I also suggest, if you are going to take this huge task on, that you bring over everything to here, word for word, without editing, and only edit when you get here. Then the change log of the merge exercise will all be in one place.
I think once it is all in one place that possibly some of the merging work can be shared, if we are careful, but if you would prefer to have a go at it first that's fine by me, as long as we can tell from the log what has happened, and so I can revert you no more than three times. (Joke!) Paul Beardsell 17:33, 22 Mar 2004 (UTC)
No, I thought exactly that we make the changes here, and then copy and paste the entire version into the main article. The edit history would not be lost then. But what bringing over are you talking about? I brought over everything from the main article into the NPOV version that I considered necessary; there was more, but if I didn't include it, it was just because I (and Matt Stan also) would like the article to be a little shorter. I don't want to bring more, but feel free to do so yourself if you consider it necessary. I made a few spelling and grammar corrections to what I included from the main article; please compare these paragraphs to those in the main article, and change them (or revert my changes) if it is not the way you like. In particular, I changed a bit the wording of the "Strong AC" argument; do you agree that it is much more clearly said that way? Tkorrovi 22 Mar 2004
But a problem was caused. You brought things across but, in one or two areas, misinterpreted what someone else said. That may have been their fault, not yours, as they might not have been clear in what they wrote. The difficulty is that Wikipedia will not let you compare two different articles to find out what the edit was that you made to the text while it was in transit from one to the other. Please, bring it all over as is, possibly at the end of the article, and save that version BEFORE editing. Then cull, cut, reinterpret, because the author of the corrupted paragraph can then see what has happened. No article should be longer than necessary. But it is sometimes necessary to get longer before getting shorter, as the bishop said to the actress. Paul Beardsell 18:12, 22 Mar 2004 (UTC)
OK, I may do so, but for this discussion it's important to know what exactly you consider that I misinterpreted. I just want to know; it may be important for editing. I didn't want to misinterpret anything, but different people always understand things in slightly different ways; this is why it's better when several people look at the text -- one notices what another doesn't. Tkorrovi 22 Mar 2004
Would it be worth summarising what the issues are about this topic? I'll have a go, and perhaps we can reach some consensus:
1) The epistemological question of whether artificial consciousness is possible, or whether the term is an oxymoron, i.e. that by definition consciousness cannot be artificial because it wouldn't then be consciousness at all. To get around this, we either have to remove the need for thought from the definition of consciousness or change the title of the piece to simulated consciousness.
2) The question of whether consciousness, or indeed artificial consciousness, necessarily requires a predictive capability, as suggested in the original article. Evidence from alternative sources should be provided to justify the original claim, and I have suggested that the alternative of anticipation should be included to cover this requirement.
3) The question of whether it is possible to define an average human for the purposes of setting criteria against which to measure the capabilities of an artificially conscious machine. No attempt has been made to indicate what this average is, and I have suggested that even a totally paralysed person or a highly mentally retarded person is still deemed to be conscious by humane people. I would add that a newborn baby and an Alzheimer's sufferer are also both conscious, although the latter probably in an impaired way.
4) The question of whether merely the ability to demonstrate consciousness of some phenomenon should be deemed consciousness (consciousness in the transitive sense) or whether consciousness is absolute and doesn't require its experiencer to be conscious of anything in particular in order to be conscious (consciousness in the intransitive sense). If we accept that any inanimate object that is used to engineer some outcome is itself conscious by virtue of its function, then there isn't really anything to artificial consciousness, and it could simply be defined as anything instrumental in achieving some end.
5) The question of reliable academic sources to back up claims made about a technical subject for the purposes of its entry in an encyclopedia, which I haven't seen any evidence of yet.
Matt Stan 18:21, 22 Mar 2004 (UTC)
That's a very good approach to take and it needs some conscious attention, it being 2:30AM here I will be back later. Paul Beardsell 18:37, 22 Mar 2004 (UTC)
[1] The term is bad, but it was coined by others like Igor Aleksander and it is not for us to change it. You may start a "simulated consciousness" page of your own; this may even be a better term, but unfortunately it is not a widely accepted one. But the term is just a term; it must be defined, and the definition determines the meaning. It does not necessarily have to mean *artificial* *consciousness*; it may also mean simulating consciousness by artificial means, and this is not an oxymoron. Whether to remove thought is another question; we may also simulate everything that we can simulate about thought. But by some views there may be a need to exclude it; all these views must be included in the article.
[2] The article by Igor Aleksander where predictive capability is considered one requirement for AC is included in the NPOV version. For NPOV, all requirements that may be considered necessary should be listed, including anticipation, awareness, etc., but in addition to predictive capability -- not deleting one requirement because another requirement is included.
[3] What average person means is more or less self-evident to most people. Such people are considered conscious in another context (medical: whether a person can move his body or not). People often don't say that a mentally retarded person has the consciousness of an average human. A newborn baby is another question; this is again a matter of views, but it is likely more than any artificial consciousness, in the sense that by learning it can mostly achieve all the abilities and aspects of consciousness of an average human.
[4] These are again definitions of the term "consciousness" for use in specific contexts. One view is to proceed from the most general definition, and this demands almost all mental abilities of an average person to be present for something to qualify as having consciousness.
[5] Of course sources must be included, but as the term is in use, and also in some sense important, it qualifies for entry into an encyclopedia much more than some other subjects.
And maybe it's better to discuss a bit more slowly; otherwise there would not be enough quality in such a discussion. Nothing wrong with asking 5 questions at once, but it is not always best.
Tkorrovi 22 Mar 2004
A link "lectures by Igor Aleksander" to show that the term "artificial consciousness" has been used in scientific context http://www.i-c-r.org.uk/lectures/spr2000/aleksander13may2000.htm Tkorrovi 22 Mar 2004
Also see http://www.ph.tn.tudelft.nl/People/bob/papers/conscious_99.html
Thank you indeed Matthew for the paper.
About oxymoron. I talked to several people, including some PhDs, about artificial consciousness, and not all consider artificial consciousness just nonsense. And then again, some scientists consider, for example, consciousness studies (and everything related) nonsense as well; this is a matter of views again. So if you don't want anybody to consider what you do nonsense, then don't work on anything related to consciousness. At the same time, artificial consciousness is likely to be an important link between consciousness and AI. What most of the people I talked to say, though, is that the term "artificial consciousness" is somewhat misleading because of the words used. Without knowing any definition or anything about it, the first association would be human consciousness built artificially, or even consciousness to replace natural consciousness (a very bad meaning). Many people don't like the idea that consciousness can be made by artificial means and think that it must be some cranky effort to build an artificial human. Without a definition one cannot realize that a mere simulation of conscious abilities is meant, as close to the natural abilities as we can get based on our knowledge of the subject. Some efforts also involve artificially simulating certain feelings (emotions). Some are systems intended to be unrestricted enough to enable the development necessary to achieve certain abilities of consciousness like prediction, or imagination by enabling the creation of different alternatives in certain circumstances. But these are systems not so immensely complicated (though often not easy either), at least very, very far from any artificial thinking at the level of the human. So yes, the term is bad and misleading; "simulated consciousness" or similar may be much better. But the term "artificial consciousness" has started to be used in scientific context, and my opinion is that it's not for us to change it. 
But if you think so, feel free to create a "simulated consciousness" article; the "artificial consciousness" article must remain, because this term is in use. Maybe it must be written that some people think it's nonsense, but then the same applies to AI, because some people think that it is nonsense as well. Maybe the AI article was once edited by people who thought it a failed field (I have that impression when I read the older entries), but later the people who remained to edit it were people who didn't think so. Compared to AI, AC is of course by far less significant. These were my somewhat random thoughts about the subject. Tkorrovi 22 Mar 2004
Matthew, notice that thoughts, and even feelings, were included in the definition of consciousness in the paper you presented. What I don't like, though, is the use of the word "soul". Even if it had a strictly defined and objective meaning, I think it is not right to use such a word in a scientific context, as it comes from religion or belief. There is such a variety of different ideas and interpretations concerning artificial consciousness, artefactual consciousness, simulated consciousness etc. that the only possibility is to write the different views separately; there is no general consensus about this in science yet, but the research is still being done. Tkorrovi 22 Mar 2004
I am puzzled about the idea of consciousness being associated with prediction. I thought that perhaps it meant anticipation in the short term, i.e. immediate cogent reaction to imagined possible events (including internal events such as might emanate from thought processes). Can anyone explain, in relation to consciousness, what is being predicted and by whom, and why this is thought to be an essential component of consciousness? Matt Stan 08:49, 25 Mar 2004 (UTC)
According to my Concise Oxford Dictionary, "anticipate" in the wider sense means "foresee", "regard as probable" etc., so it means the same as "predict" ("foretell"). The difference is that "anticipate" also has the narrower meaning "deal with before the proper time". If you talk about immediate reaction to imagined events, then you most likely have that meaning in mind. No, "predict" is not used in that sense in AC. In the paper I added to the NPOV version, Igor Aleksander talks about the "Ability to predict changes that result from action depictively". It is also said in the paper by Rod Goodman and Owen Holland www.rodgoodman.ws/pdf/DARPA.2.pdf that "Good control requires the ability both to predict events, and to exploit those predictions". The reason we need to predict the changes that result from action is that we can then compare them with the events that really happened, which enables us to control the environment and ourselves (i.e. act so that we can predict the results of our action). This is also important for training AC -- the system tries to predict an outside event, and if this event indeed happens, that gives it a positive signal. What is necessary for this is imagination, i.e. generating all relevant possibilities for a given case, for which the system must be very unrestricted. Also necessary is some sort of "natural selection", so that only those models (processes) survive that fit their environment. So the events are imagined not in order to react to them immediately; they are stored to be exploited later, at the time when the predicted outside event should occur. Tkorrovi 18:50, 25 Mar 2004 (UTC)
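The training scheme described here -- imagine alternatives, let each predict the outside event, take a fulfilled prediction as a positive signal, and let selection cull the models that keep failing -- can be sketched in a few lines. Everything below (the class, the fitness threshold, the toy event stream) is invented for illustration and is not any actual AC system:

```python
class CandidateModel:
    """One imagined alternative: a process that commits to a prediction."""

    def __init__(self, guess, fitness=0):
        self.guess = guess
        self.fitness = fitness

    def predict(self):
        return self.guess


def train(events, pool):
    """Reward models whose predictions come true; cull and replace failures."""
    for event in events:
        for model in pool:
            # a prediction that comes true is the positive training signal
            model.fitness += 1 if model.predict() == event else -1
        # "natural selection": a persistently unfit process is replaced
        # by an offspring of the currently fittest one
        worst = min(pool, key=lambda m: m.fitness)
        if worst.fitness < -3:
            fittest = max(pool, key=lambda m: m.fitness)
            pool.remove(worst)
            pool.append(CandidateModel(fittest.guess))
    return max(pool, key=lambda m: m.fitness)


# imagination step: generate both alternatives for a binary outside event
pool = [CandidateModel(g) for g in (0, 1) for _ in range(3)]
best = train([1, 1, 0, 1, 1, 1, 0, 1], pool)  # mostly-1 event stream
```

After training on the mostly-1 stream, the surviving fittest model is one that predicts 1. The point of the sketch is only the shape of the loop: prediction, comparison with what really happened, reinforcement, selection.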
This hinges on the word "necessary". Anticipation is a very useful, desirable attribute for a conscious being to have. But that does not mean it is a necessary attribute of consciousness. I agree with Tkorrovi about all the advantages of anticipation, just not that it is necessary. That it is necessary has not been shown. Desirable, yes. Useful, yes. Necessary, no. Therefore what is only supposedly necessary does not merit a prominent, headline, first-definition position in the article. Paul Beardsell 09:13, 26 Mar 2004 (UTC)
If a human is passive then it can still be conscious. So "passive animate objects" can be conscious. If all "inanimate objects" cannot be conscious then the big question is answered and, in my view, we can go home. So passiveness disqualifies an inanimate object from consciousness but not an animate one. Which is just too blatant an adoption of a privileged position to be allowed.
But only one of many. Luckily, for my argument, the thermostat is not passive.
Paul Beardsell 09:06, 26 Mar 2004 (UTC)
No. Nonsense. Any word now seems to mean consciousness. Or not. When the word is used in relation to a human then it means consciousness. When the same word is used in relation to a machine then it means not-consciousness. Or the word is inadmissible because, why? It's a machine! Matthew, it is not an unreasonable prejudice to have, but it is unreasonable not to recognise it as a prejudice: you as good as define consciousness as something only a human (or possibly some higher animals) can have. I put you to the same test I put Tkorrovi: magic spark or new physics? Paul Beardsell 17:15, 26 Mar 2004 (UTC)
My method can be summarised as follows:
Is that such a controversial approach? We are still, unfortunately, bogged down at Step 1, or at least some of us are!
Final point here: the use of artificial (or simulated, which effectively comes down to the same thing, from the earlier discussion) merely denotes the idea of consciousness being made by means other than the natural means by which consciousness usually arises. Any attempt at artificial consciousness deployment that fails the tests will by definition preclude the artifact under test from being deemed conscious (or artificially conscious, if people wish to make that distinction), i.e. from possessing (artificial) consciousness.
How's that? Can we move forward now? Have you still got your Lego set [1], and the time and money to become the pioneer? I will award a prize of one anonymous Wikipedia log-in to the person who comes up with the first implementation. (See User talk:Paul Beardsell for decryption of the last sentence.) Matt Stan 19:01, 26 Mar 2004 (UTC)
Whether or not I am in a double-bind in another discussion has no impact here. Or should not! You are seemingly irritated by my pointing out contradictory use of language and asking you basic relevant questions which you do not address. I suggest this is because you are reluctant to challenge your own fundamental beliefs. :-)
When you assert that simulated and artificial are the same you must recognise that this does firmly peg you into the "artificial consciousness can never be real consciousness" school. You insist on a recipe for consciousness which implies a set of values which makes consciousness human-like: So you are firmly pegged in that school too. These are perfectly reasonable if anthropomorphic views to hold. But you seem to deny the admissibility of other views.
What if (real) consciousness could be built, but not one that was sufficiently human-like to pass your tests? That would be a real tragedy: Refusing to recognise a possibly rich otherness.
After that can I buy you a beer sometime between 9 April and 1 May? Of course!
Paul Beardsell 04:40, 27 Mar 2004 (UTC)
Microbotics, Domotics, Domobot, Digital pet, Tamagotchi, Neopets
And so, on to my next question. What will it be for, assuming we are talking about engineering an artifact, the exact requirements for which have not all been defined? It could end up an interesting curiosity, one that might even warrant putting a thermostat alongside it in the telling of the story of how it came about. Or what else could it be? I'm suggesting that if we met the initial criteria for passing the first test, then we would have achieved something which could be improved, and the notion of giving it heuristics of its own opens endless possibilities by which it might far exceed the constraints of mere human consciousness. As for being accused of anthropomorphism, I am not arguing necessarily that the model should be a baby, just that it should be considered, and alternatives proposed. And for us to be able to say that AC exists, rather than its just being an idea on a discussion page, we need some artifact to mention. At the moment the only other candidates we have are a thermostat and perhaps a dogbot. I don't see why we should be averse to the idea of using ourselves as the model for something only we understand. If we are talking about a consciousness that is other, then I suggest we switch to entanglement theory and the idea that the future can alter the past, and see whether the patterns that are observable in the universe manifest what one might construe as a godly consciousness. But that would be neither simulated nor artificial. Please remind me of the qualities of this otherness. What should I read (or re-read) in order to gain an appreciation of it? Matt Stan 08:43, 27 Mar 2004 (UTC)
Matt Stan 11:09, 27 Mar 2004 (UTC)
While I attempt to craft a more thoughtful response, this struck me after my last contribution:
One view is that AC will not be real because we are too dumb to build real C. This is a defeatist view which tempts us to give up before we start, but maybe it is a realistic view. AC which is really C might be so otherly (is that a word, I asked), othernessly (also no good). Otherworldly! There is my example. Should we be visited by aliens, how will we test that they are conscious? Easy! After testing their intelligence using the Turing test we will test their consciousness with the Stannard test. We will test these bilaterally symmetrical, bipedal, two-eared, swivelly-eyed aliens with our anthropomorphic (both meanings) tests!
If we assume aliens exist (at least for this argument) we have no good reason to expect aliens to be bipedal or even to breathe air. Yet we would expect (some) aliens to be conscious, I suggest. But that consciousness is less likely to be human-like, I suggest, than their locomotion is to be bipedal.
By this thought experiment I hope to have established that consciousness of a non-human type is possible or, depending on your cosmic view, likely.
Paul Beardsell 11:20, 27 Mar 2004 (UTC)
As to the quantum entanglement point: Certainly it is not me who seems to want to invoke new science or ignore old science: I have been pointing out that the existing science indicates there is no obstacle to AC being real. Paul Beardsell 11:20, 27 Mar 2004 (UTC)
I'm still not clear on the issue you take with my notions about artificial vs simulated. I had intended that they should refer to the same thing, but was pointing out that "artificial consciousness" is oxymoronic because once AC is achieved it ceases to be artificial and becomes real, whereas simulated consciousness can be as real-like as we make it and no semantic problems arise. Are you suggesting that simulated consciousness is actually something different, which I haven't taken into account?
I was also indicating that the test should be that humans are the judge. I was not stipulating what the business requirements are, but putting forward a set that might meet the test requirement. You might come up with a philosophic argument that identifies an artifact as conscious, as per that philosophic argument, but that would not count in the popular view as consciousness. When the aliens come, will they be expected to give us logical proofs of their consciousness (to help us decide whether their consciousness, such as it is, is real or artificial?), or will it just remain a human perception as to whether they are conscious or not? The less like us they are, the more difficult it might be to judge, but I am maintaining that ultimately we can only judge by what we consider to be consciousness, based on our own experience. Therefore consciousness is necessarily anthropomorphically defined. And it has to interact with humans at some level in order to be tested. (See Argument from ignorance, Anthropic principle.) Matt Stan 11:56, 27 Mar 2004 (UTC)
You raise two points I can readily address: The definition of artificial and the anthropic principle.
Back when there was no assisted locomotion other than that provided by animals, had the concept of artificial locomotion been discussed, some might have held that it was impossible; that any locomotion so achieved would be simulated, not real; and that therefore simulated and artificial are synonyms. But they would have been wrong. I suggest that we stick to the dictionary definition: artificial -- made by man or otherwise assembled, not arising naturally.
Formally: If A and B are both properties of X then it does not follow that A is B or that A is a subset of B or vice versa. A and B can be disjoint, distinct. Let X be "consciousness", let A be "simulated", and let B be "artificial".
And had the test for detecting something to be locomotion been that its propulsion must be similar to the locomotion then known, by legs, then the motorcycle would have failed the test. That consciousness must be tested against and by humans is your assertion. That it can be so tested, I agree. But a more objective test would be useful. You seem to use the fact that we are conscious as a handicap to our recognising consciousness elsewhere. I am not a snail yet I can recognise a snail. If I were a snail, recognising a snail surely would be easier, not more difficult? Let us decide on a simple, non-anthropomorphic (I believe I have shown this is necessary when dealing with aliens) definition of consciousness, and proceed from there.
The anthropic principle is where you go when forced. It is not supposed to be your first refuge.
Paul Beardsell 12:27, 27 Mar 2004 (UTC)
Damn! I wish I had used flight not locomotion. Then I would characterise your argument as saying that for something to be flying it must have flapping wings. No, I would say, let's look at the definition of flight. Here, too, I say, let's look at the definition of consciousness. Is it self-aware? Then it is conscious. Is it man-made? Then it is artificial. Paul Beardsell 12:43, 27 Mar 2004 (UTC)
I'm happy to stick to the dictionary definition: artificial -- made by man or otherwise assembled, not arising naturally -- though simulate is defined as: imitate the conditions of (a situation or process); spec. produce a computer model of (a process) (SOED, mid-20th-century usage). If the AC machine were to contribute to its own consciousness by virtue of its heuristics, would that instance of its consciousness be artificial, or could it be said to have arisen naturally as a result of its having received a wake-up call from an external source? I think simulated indicates a more robust approach, and of course is less anthropomorphic than artificial. Matt Stan 13:57, 27 Mar 2004 (UTC)
Have we been round this loop yet? Self-awareness implies to me the notion of a self that is aware and that which it is aware of, and allows for stimuli arising in its internal environment, but it leaves out the idea of the external environment, i.e. awareness that is not self-awareness. That it is aware of either is determined by its paying attention to one or the other (or both). Therefore this self-awareness is subsumed within attentiveness -- it is just one part of it. The notion of the self that is aware/attentive is an important prerequisite, though. Perhaps self-awareness is the wrong term for the fundamental characteristic of consciousness and should be replaced with awareness of environment, where environment includes input from external sources via the senses and input from internal resources such as memory.
Also see [Consciousness-only].
About awareness http://tkorrovi.proboards16.com/index.cgi?board=general&action=display&num=1080491783 Tkorrovi 16:20, 28 Mar 2004 (UTC)
You are still using the term average human, but I maintain there is in any event no such thing, and that a human who manifests the minimum requirements of consciousness is nevertheless conscious. Therefore we should aim in the first instance that an AC implementation emulates the minimum requirements rather than any notional average. The problem is hard enough without making it more difficult unnecessarily. Matt Stan 21:35, 28 Mar 2004 (UTC)
I was interested to read in the articles starting with Artificial intelligence (which, incidentally, cover much of the ground we have been attempting to cover here) that one of the commentators had indicated that a body is an essential prerequisite for digital sentience. I need to go back to those articles to resolve what in effect are the intersections/distinctions between AC and other forms of artificial humanity (or whatever we want to call it), but I pose here the question whether the artifact that we are postulating for the purposes of proving the existence of AC must necessarily have some robotic element, i.e. whether it cannot be entirely absorbed in self-awareness or, put rather more crudely, onanistic. For example, even if we decided not to build a mechanical robot to demonstrate AC, the representation of an image on a screen, coupled with a camera pointing at whoever was watching that screen, could help to give the AC machine the necessary response mechanisms to be verifiable. Without such, or similar, could it ever be convincing? I suppose I am specifying something very basic, i.e. that the thing needs outputs, in order that we can observe it, and inputs, in order that we can test it. These may not be prerequisites of AC per se, but for the purposes of testability I am suggesting that they are prerequisites of any verifiable implementation. Matt Stan 07:50, 29 Mar 2004 (UTC)
To avoid causing offense I think you should say that Hawking understands things better than "you or I". Paul Beardsell 13:47, 29 Mar 2004 (UTC)
I agree about reading the other Wikipedia articles: Consciousness needs tidying up but there is some good stuff there. Paul Beardsell 13:51, 29 Mar 2004 (UTC)
I reckon the body could be simulated but the consciousness be entirely real, even if artificial. The conscious entity would, in this example, be deluded about the existence of its body. Paul Beardsell 13:56, 29 Mar 2004 (UTC)
In this section I suggest we list those attributes of consciousness which are necessary. I.e. If any one of the listed attributes is missing from an entity then the entity is not conscious. Having all these attributes does not necessarily make the entity conscious either!
The conscious entity should know something about its own state. The thermostat knows if it is too cold or too hot.
It should know its own physical limits, its (real or simulated) body. Insects qualify here. Trivially: a tamper-resistant device could be said to have this.
It should understand something about its identity: that it is distinct from other, possibly similar, objects. Many vertebrates seem to get this right. Trivially: some devices (e.g. RFID tags) are acutely aware of their own serial number.
Paul Beardsell 05:10, 30 Mar 2004 (UTC)
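The three attributes listed above could be expressed as a minimal checklist for a candidate entity. This is only a sketch of the list as stated: the names are invented here, and passing all checks still does not make the entity conscious, while failing any one disqualifies it.

```python
from dataclasses import dataclass, field


@dataclass
class Candidate:
    """A hypothetical entity to be checked against the necessary attributes."""
    state: dict = field(default_factory=dict)  # own state, e.g. {"too_hot": True}
    body_limits: tuple = ()                    # physical extent, real or simulated
    identity: str = ""                         # what makes it distinct from similar objects


def fails_necessary_attributes(c: Candidate) -> bool:
    """True if any necessary attribute is missing (entity not conscious);
    False means only that the entity is not yet disqualified."""
    knows_own_state = bool(c.state)          # the thermostat level
    knows_own_limits = bool(c.body_limits)   # the insect / tamper-resistance level
    knows_identity = bool(c.identity)        # the RFID serial-number level
    return not (knows_own_state and knows_own_limits and knows_identity)


# a thermostat knows its state but has no body model and no identity
thermostat = Candidate(state={"too_hot": True})
# an RFID tag trivially ticks all three boxes -- which shows the list
# is necessary, not sufficient
rfid_tag = Candidate(state={"powered": True}, body_limits=(1, 1), identity="SN-0042")
```

The RFID tag passing the whole checklist illustrates Paul's own caveat: having all the attributes does not make an entity conscious.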
My problem here is with know. How do you know whether a thermostat knows anything, as distinct from how I know you know anything? In your case, I can ask you 'How do you know you are too hot?' as opposed to just 'Are you too hot?'. Not so with a thermostat. There is a distinction between knowing and just being -- it's an epistemological question that needs addressing in terms of AC entities. Matt Stan 08:08, 30 Mar 2004 (UTC)
When I say I know something you recognise that I am at least superficially similar to you, and that I might, therefore, mean something similar by that term as you do. You have said that you are forced into the anthropic principle in these circumstances when talking about consciousness because of problems like this. Interestingly, when I say I am too hot you know what I mean but you might disagree and think it too cold! When I say it is hot I am not really making a comment only about the temperature: amongst the things I am referring to is how quickly I am gaining or losing heat. This is a complicated function of several factors: my current metabolic rate (itself a function of how recently I ate, recent exercise etc.), the wind speed, the humidity, how I am dressed, whether the heat I receive is from radiation or conduction, etc. When a thermostat "says" it "knows" it is too hot it makes a reliable comment about the temperature. I do not. Yet you allow me "knowledge" about the temperature but you deny it of the thermostat. Essentially, once again, you reserve the word "know" for humans. Fine, say I: what word will you allow for thermostats, and I will use that for humans too. Paul Beardsell 08:52, 30 Mar 2004 (UTC)
You know I am too hot because I turned the air conditioning on. You know the thermostat is too hot because it has turned the air conditioning on. Paul Beardsell 09:09, 30 Mar 2004 (UTC)
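The behavioural test in this exchange -- we credit the thermostat with "knowing" it is too hot on exactly the evidence we use for each other, namely that it turns the air conditioning on -- might be sketched as follows. The class and names are invented for illustration:

```python
class Thermostat:
    """Its only "comment" about the temperature is the action it takes."""

    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.aircon_on = False

    def sense(self, temperature):
        # the reliable comment about temperature: a single threshold
        self.aircon_on = temperature > self.setpoint


t = Thermostat(setpoint=22.0)
t.sense(25.0)
# an observer attributes knowledge from the action alone, just as Paul
# describes: "it knows it is too hot" because the air conditioning is on
observer_inference = "too hot" if t.aircon_on else "not too hot"
```

Nothing inside the class distinguishes "knowing" from "just being" above the setpoint; the attribution of knowledge lives entirely in the observer's last line, which is the epistemological point Matt raises.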
This page was copied to Talk:artificial consciousness. As NPOV version is merged, we should continue the discussion there. Tkorrovi 12:34, 26 Mar 2004 (UTC)
- The discussion has continued here. Matt Stan 08:10, 27 Mar 2004 (UTC)