This is the talk page for discussing improvements to the Operant conditioning article. This is not a forum for general discussion of the article's subject.
A summary of this article appears in learning.
What is it that needs simplifying? Operant conditioning is difficult to understand and does not lend itself to simple explanations. — Preceding unsigned comment added by 193.216.61.5 ( talk) 09:39, 4 September 2005 (UTC)
I don't see what needs simplifying. Maybe it's because of my Psychology background that this page seems crystal clear to me. — Preceding unsigned comment added by 216.232.63.213 ( talk) 18:08, 8 November 2005 (UTC)
I believe the article has transposed the definitions for Negative Reinforcement and Negative Punishment. Negative Reinforcement is the removal (negative) of a reinforcing stimulus (such as a child's toy) to discourage a behavior. Negative Punishment is the removal (negative) of a punishing stimulus (such as a loud noise) to encourage a behavior.
I haven't edited the article because I may be missing something. — Preceding unsigned comment added by 155.76.223.253 ( talk) 19:51, 10 November 2005 (UTC)
"Negative Reinforcement is the removal (negative) of a reinforcing stimulus (such as a child's toy) to discourage a behavior."
"Negative Punishment is the removal (negative) of a punishing stimulus (such as a loud noise) to encourage a behavior."
As an attorney, my interest is in discipline (help) rather than punishment (harm). I consider the best example of negative punishment to be the suspension of social and economic privileges. This is different from positive punishment which causes harm and encourages retaliation. In discipline, the suspension of privileges can be restored as milestones are met (negative reinforcement). Both styles of punishment discourage related behavior and both styles of reinforcement encourage and direct desired behavior. --Eugene Patrick Devany-- blogging on Quora — Preceding unsigned comment added by EugenePatrickDevany ( talk • contribs) 22:09, 25 July 2022 (UTC)
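Editor's note on the thread above: the standard textbook contingency table may help here. Below is a minimal sketch of it in code; it is purely illustrative (the function and argument names are my own, not from the article or any cited source), and it reflects the conventional definitions, under which removing a stimulus to discourage behavior is negative punishment and removing a stimulus to encourage behavior is negative reinforcement.

```python
# Minimal sketch of the standard operant-conditioning contingency table.
# Illustrative only: names are my own, not from the article.

def classify(stimulus, behavior):
    """stimulus: 'added' or 'removed'; behavior: 'increases' or 'decreases'."""
    table = {
        ("added", "increases"): "positive reinforcement",
        ("removed", "increases"): "negative reinforcement",
        ("added", "decreases"): "positive punishment",
        ("removed", "decreases"): "negative punishment",
    }
    return table[(stimulus, behavior)]

# Taking away a child's toy to discourage a behavior removes a stimulus
# and decreases behavior, so by the standard definitions:
print(classify("removed", "decreases"))  # negative punishment
# Ending a loud noise when the desired behavior occurs removes a stimulus
# and increases behavior:
print(classify("removed", "increases"))  # negative reinforcement
```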
Santaduck 03:13, 20 January 2006 (UTC)
At first it said the person worked w/ cats but then it said rats!! Which one is it? — Preceding unsigned comment added by Lilsaalex ( talk • contribs) 15:41, 8 February 2006 (UTC)
I'm curious as to what you guys think was the most effective. And if anyone thinks that which animal you use really matters. As humans, we can say we are the dominant species and it trickles down, but when it gets to the lower brain capacity of different animals, do you think it played a big role? Justin.edwards ( talk) 02:12, 11 December 2017 (UTC)
The consequences link doesn't really make sense. 128.213.28.129 20:43, 15 February 2006 (UTC)
New section includes this paragraph:
So I don't understand what the point is--that allowing the dog to indulge prey drive when they do something correct is NOT a positive reinforcement? It seems to me like it is. Dog does the weave poles really fast, they get the tug toy. Dog doesn't go as fast, dog doesn't get to play tug. How is that not a positive reinforcer? Elf | Talk 00:55, 24 February 2006 (UTC)
The section on prey drive is inconsistent with the rest of the article. Not everyone agrees that tracking or working dogs have to be rewarded every time; this is more the author's bias than fact, especially without seeing any citations. It is stated that prey drive is an example of an exception to operant conditioning. This is conjecture, as again no sources are cited. Giving the toy or throwing the ball is an addition of something the animal wants - therefore it is positive reinforcement. Even though this is not a food reward, it is a conditioned reinforcer. If the animal does something correctly, it is given this reinforcement. We really don't care why the animal wants the reward. The fact that it works for the reward makes it operant conditioning. — Preceding unsigned comment added by 148.168.40.4 ( talk) 19:06, 7 July 2006 (UTC)
OK, going by a suggestion on the new contributor's question page, I'm going to lay out what I think should be done with this section. The whole "drawbacks and limitations" section needs to be redone. Obviously, Behavior Analysis tends to draw a lot of ire and so the popular insistence for such a section, no matter how badly done, is very strong. However, the opening paragraph on the "drawbacks" section illustrates this problem nicely. A Nobel laureate is cited as stating that operant conditioning doesn't take into account "fixed" reflexes, yet in the very same paragraph we have an explanation (though incomplete) about how operant conditioning isn't supposed to deal with reflexes to begin with because the form of a reflex is, as mentioned, biologically fixed in form, whereas operant behavior is defined as behavior whose form is modifiable by consequences. This demonstrates something that BF Skinner himself noted, that a person's criticism of Behavior Analysis is inversely proportional to how much they actually understand it (a phenomenon that also holds true for other scientific models, like Evolution by Natural Selection). I intend to keep that criticism of the Nobel laureate in the article, but expand upon the paragraph to explain Skinner's rationale for not including reflexes as a form of operant behavior.
Also, the entire "prey drive" portion needs to be removed. In its place would be a listing of factors that alter the effectiveness of consequences, factors such as what I previously mentioned about "satiation." It could look like this:
I will wait approximately a week (maybe more) for further feedback about my intended alterations. Afterwards, I will see how much of what I have included above I will implement. Lunar Spectrum 05:17, 30 August 2006 (UTC)
Nearly a month has passed and there is no comment about my suggestion. I think I will simply add what I have outlined above in a new section and deal with the prey-drive section some other time. -- Lunar Spectrum | Talk 02:05, 26 September 2006 (UTC)
I am not sure if I should have done it or not; perhaps I ought to wait and think and reflect before I edit, but the section that mentioned prey drive was SO far outside of the article that I rewrote it so that it has something to do with the discussion of FAPS versus OC. Whoever wrote the one that I edited out doesn't understand OC or FAPS but would surely like to convince the rest of the world that using prey drive is a valid method of training. At best, it is sloppy terminology that doesn't really belong in any training program that is developed using operant conditioning as its model. So, if I went overboard in my edits, I do apologize; however, the first bit was really, really bad. I am conducting a workshop this weekend on operant conditioning, and I will go back and add references after the workshop is done; sorry but I am swamped at the moment. Suenestnature ( talk) 05:24, 2 January 2009 (UTC)
I was reading the section in the article on Thorndike and his theories, and noticed that there was a passing reference to Skinner's research on reinforcement, no doubt to do with the "Skinner Box" experiment. Considering he was one of the greatest researchers in this area of psychology, could a section be added to explain the principles and methodology of the experiment? 58.169.141.5 23:46, 28 April 2006 (UTC) Nick
For what it's worth, note in passing that Karen Pryor: Don't Shoot the Dog! defines negative reinforcement and punishment differently. To Pryor, the main difference is timing. A negative reinforcement is something disagreeable that the subject can immediately stop by changing his behavior. A punishment is something that happens later that the subject cannot immediately stop by changing his behavior. If Auntie frowns when I put my feet on the coffee table, and stops frowning when I take them off, that is what Pryor calls a negative reinforcement. If I get a bad grade on my report card that reflects all the work I haven't done in class this year, that is what Pryor calls a punishment. Pryor notes that even though punishment is everyone's favorite method of untraining unwanted behavior, it rarely works because the subject usually has difficulty connecting the punishment with the behavior; often, the subject learns to evade punishment instead.
The behaviorist psychologist H. J. Eysenck talks in similar terms in his book Psychology Is About People, Chapter 3. He insists on talking about positive and negative reinforcement instead of reward and punishment, despite the clumsiness of his preferred terms, because with rewards and punishments the timing may make it difficult for the subject to connect the result with the behavior. — Preceding unsigned comment added by 4.232.102.216 ( talk) 20:28, 7 May 2006 (UTC)
For what it's worth, it's all too nitpicky. If you want to be pure, non-implicative animal behaviorists, P+, P-, R+, R- is this simple.
If a child screams, a parent picks them up, and the child stops screaming, like it or not that is P+. If you expanded the timeline and looked at recurring behavior, you might see increases in intensity and frequency, and then you might understandably label it R+; however, purely analytically, you cannot read pain and reward into P and R just because we think rats like cheese or dislike tail shocks. We cannot know with 100% certainty the intentions of an animal and their perceptions. We can only observe what causes behavior to go up and down. Dogs and cats sometimes love to be petted; other times it's very punishing to them. R- is a big annoyance for me because people always use examples of physically aversive loud noises, ear pinches, etc., and while that's often the case, we are already analyzing the reinforcing agent through our own conditioned emotional responses. Let's look at a receptionist at a doctor's office. She puts out candy and people smile more in the office, so she puts candy out every day after that. Now, was she reinforced by the increase in smiles (R+) or by the removal of frowns (R-)? It completely depends on the individual's temperament, and you would have to ask her: hey, what do you like more, no frowns or smiles? Animals can't speak to us on those terms, so we cannot assume what the likely reinforcer is. All punishments and reinforcements have this duality. Did the rat get reinforced by the cheese because it likes cheese, or by the loss of hunger? Does a child in timeout curb undesirable behavior because they lost the ability to play with friends? Or because they don't like the timeout room or stool? Typically we argue it's the loss of the ability to play that makes a timeout P-; however, the emotional response associated with the timeout room or stool may actually cause the child to respond more strongly to the instruments of the timeout than to the loss of opportunity, making it P+.
There are many times with animals that non-physically aversive stimuli are punishing, and non-physically rewarding stimuli are reinforcing, because you cannot dismiss the effect of Pavlov in understanding an animal's conditioned emotional responses to stimuli. Some kids like time-outs, and some people enjoy cutting themselves, so cluttering up operant conditioning with words like "rewarding" and "aversive" is anecdotal and non-scientific, as well as more confusing to Billy and Susie. Yes, R- is often easily viewed as an escape, but that is not its definition and can cause confusion in the less plastic mind. PB- 11/7/10 11:11PST —Preceding unsigned comment added by 98.247.244.101 ( talk) 19:18, 7 November 2010 (UTC)
I'm not too sure what goes ineffective when extinction occurs. I assume it's the reward (the pellet)... but then it seems like the behavior became extinct. Regardless, I'm confused and this paragraph ought to be clarified.
Extinction is a related term that occurs when a behavior (response) that had previously been reinforced is no longer effective. In the Skinner box experiment, this is the rat pushing the lever and being rewarded with a food pellet several times, and then pushing the lever again and never receiving a food pellet again. Eventually the rat would cease pushing the lever.
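Editor's note: the extinction process the quoted paragraph describes can be sketched as a toy simulation. This is only an illustration of the idea (the function, parameters, and update rules are my own arbitrary choices, not a model from the article): response strength builds while lever presses produce pellets, then decays once they stop.

```python
# Toy simulation of extinction: a response that is reinforced grows
# stronger, and once reinforcement stops it weakens toward zero.
# All parameters are arbitrary, for illustration only.

def simulate(reinforced_trials, extinction_trials,
             learn_rate=0.3, decay_rate=0.2):
    strength = 0.1  # initial tendency to press the lever
    history = []
    for t in range(reinforced_trials + extinction_trials):
        reinforced = t < reinforced_trials  # pellet delivered on this press?
        if reinforced:
            strength += learn_rate * (1.0 - strength)  # reinforcement strengthens
        else:
            strength -= decay_rate * strength          # extinction weakens
        history.append(strength)
    return history

h = simulate(reinforced_trials=10, extinction_trials=20)
print(round(h[9], 2))   # strength after the reinforcement phase: high
print(round(h[-1], 2))  # strength after the extinction phase: near zero
```

So it is the response (lever pressing) that becomes ineffective at producing the reward, and as a consequence the behavior itself fades; the sketch shows both halves of that process.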
I would also explain in the intro that Operant Conditioning is not absolute - it doesn't ensure that the subject will always perform a task (as using the prey drive I gather does.) That little factoid came out of the blue in that section. — Preceding unsigned comment added by 69.109.181.222 ( talk) 09:11, 27 July 2006 (UTC)
In the section "Factors that alter the effectiveness of consequences" I included the mention of how certain factors are the result of biology. For example, I mentioned that the principles of Immediacy and Contingency are the result of the statistical probability of dopamine to modify the appropriate synapses. However, the necessity of an entire section devoted to the biological basis of operant procedures is becoming clear. I used the dopamine reference only to support the section about "Factors that alter the effectiveness of consequences," but already more biological references have been added to that section. They are good references and should be kept, but they should be moved to their own section because they do not contribute anything to the subject of the section they are currently in.
I think that the biological section should be the second section, placed right after the "Reinforcement, punishment, and extinction" section. It would be a good way to structure the article to first have exposition on reinforcement, punishment, and extinction procedures, then have a three-part section immediately following it to explain the neurophysiological effects of reinforcing stimulation, aversive stimulation, and extinction. An alternative to this might be to simply add such a discussion to each of the existing corresponding articles on reinforcement, punishment, and extinction. -- Lunar Spectrum | Talk 00:31, 29 September 2006 (UTC)
Useful info on both articles... Schedule of reinforcement should not be an article. Reinforcement probably shouldn't be either - both should redirect here. —The preceding unsigned comment was added by Thuglas ( talk • contribs) 05:21, 2 March 2007 (UTC).
I was kinda thinking that, so I put a second link to merge Schedules of reinforcement into Reinforcement. Perhaps a little thing on extrinsic and intrinsic reinforcement and secondary/primary reinforcement could be added. thuglas T| C 17:37, 2 March 2007 (UTC)
Yeah, I think that would work. Primary means food or something; secondary means money. The differences between extrinsic/intrinsic and primary are very little, but for some reason they remain separate in my mind.
I figure if no one complains in a week or so we should go ahead and be WP:bold. I've posted the link on WP psych. I don't think anyone would disagree with this idea. — Preceding unsigned comment added by Thuglas ( talk • contribs) 18:02, 3 March 2007 (UTC)
I think we were on the same page here, but to clarify: I know that secondary/primary reinforcers are not synonymous with intrinsic or extrinsic. I think extrinsic, intrinsic, secondary, and primary reinforcement would all fit into the article. (I haven't looked at it in a while; I just don't like being misunderstood.) thuglas T| C 15:13, 7 August 2007 (UTC)
Looking at the articles in question, I think that the Reinforcement article should not be merged into Operant Conditioning. The Reinforcement article has a good level of detail that makes itself stand as an article on its own. Adding to that the proposal to merge Schedules of Reinforcement into Reinforcement, and the amount of redundant content would bog down the entire article. I think that elements of the Schedule of Reinforcement article can be successfully merged into Reinforcement. But Operant Conditioning already does enough of an overview of reinforcement not to warrant Reinforcement being merged into it. That would detract from the broader focus of the Operant Conditioning article, which should be more about the modification of behavior (operant procedures) rather than the details about the tool used to modify behavior (reinforcement). Lunar Spectrum | Talk 01:11, 14 March 2007 (UTC)
Having a lot of material on reinforcement in the operant conditioning article makes it too large. Reinforcement deserves a separate article from operant conditioning. Rather than a merge from Reinforcement, I suggest that appropriate sections be merged into Reinforcement. The two articles Schedule of reinforcement and Reinforcement can be merged together. Kpmiyapuram 12:17, 10 April 2007 (UTC)
This section currently appears to have material that fits for "biological correlates of classical conditioning" and not those of operant conditioning. Kpmiyapuram 13:51, 11 April 2007 (UTC)
There is some material on Extinction (psychology) in a separate article, but I see that the current article on operant conditioning discusses it at more length. Perhaps the information could be reorganized or merged. Kpmiyapuram 14:18, 24 April 2007 (UTC)
I don't think it's accurate to relate Thorndike to Operant Conditioning. Skinner's operant was "discovered" by him alone. Thorndike used different terms and explanatory systems. This is very important. Lots of people examined learning in humans and animals before Skinner. None of that was "operant conditioning" because it relied on mediating structures ('expectations', 'drives', etc). The explanatory system is as important as the actual data (perhaps even more so).
Operants were also quantified in the operant chamber - Skinner's invention - which Thorndike did not use.
Moreover, it implies that Skinner's position is just another learning theory, and it is not. This is an attempt to rewrite the dead theories of Thorndike as the "operant" theories which have become popular and scientifically validated. Thorndike was important in his little way. Put his theories on his own page, or change the name of the page to "instrumental learning". Operant = Skinner != Thorndike.
(-Florkle!)
—The preceding unsigned comment was added by Florkle ( talk • contribs) 07:25, 16 May 2007 (UTC).
I have added a refutation of the Thorndike extension material and cited Chiesa. This whole article is problematic in its treatment of reinforcement theory, which is not very "clean" in its presentation.
Moreover the digression into the neurochemistry of reinforcement is something that Skinner has rejected since 1938 when he dismissed physiological explanations as appealing to a "conceptual nervous system (CNS)" and later.
-- Florkle 06:23, 17 May 2007 (UTC)
Why are the sections "verbal behavior" and "four term contingency" at the beginning of the article? The latter seems unneeded and the former seems like it should go much later, if at all. And why do we have this paragraph arguing that Skinner's work wasn't based on Thorndike's? Is this information relevant to discussing what operant conditioning is? If anything, I think that should be moved to a separate history section. I'm also surprised reinforcement learning isn't linked in this article, but I'll toss that into the "see also" section now... digfarenough ( talk) 13:53, 17 May 2007 (UTC)
I also think the new additions disrupt the flow of the article. They certainly might have their place somewhere in it, but right now it seems a bit random. And it also seems that the biological section was moved from the third section to, apparently, the very last??? To my thinking, the biology section should be near the beginning since, despite being the most heavily disparaged area of psychology, operant conditioning is more solidly grounded in biology than anything else in the field. So I think having that biological basis close to the top is important for the credibility of the subject matter. I think an appropriate structure to the article would be 1. history 2. basics 3. biological underpinnings 4. plus various other special topics. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)
Additionally, I think a special section on verbal behavior should clearly explain how an understanding of verbal operants extends from operant conditioning, which it presently does not accomplish. It can be done (I'd have to look over some of my old notes and google for some sources), but as an advanced topic it should go somewhere towards the end. Theoretical extensions of operant conditioning, like Skinner's Verbal Behavior, should not greatly detract from the focus of this particular article: namely, operant conditioning procedures, which are factual experimental findings. And it's certainly not a "theory" of operant conditioning... no more than a physicist would call the laws of kinematics a "theory" of kinematics. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)
And having checked on the article for Verbal Behavior, I'm now concerned about NPOV issues regarding the user who made the recent section changes in the Operant conditioning article. In the talk page for Verbal Behavior he recently states that he has "nuked all references to Chomsky's" review. Now, I may think that Chomsky's review is completely flawed. But for historical reasons, his review is appropriate subject matter for that article. It would be like having a biography on Abraham Lincoln without mentioning John Wilkes Booth. Anyway, I'm restoring the biological section to its original place in the article and moving some other stuff down to the bottom until it can be worked out. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)
It's a complete myth that Skinner rejected biology's role in behavior. It's true that Skinner was opposed to giving explanatory status to unknown mediating constructs. For example, Chomsky coming along and saying "environment can't explain verbal behavior, therefore I will invent an imaginary Language Acquiring Device and claim it exists somewhere in the brain." That is the kind of hypothetical mediationism that Skinner was against, when people pull mediating constructs out of nowhere. There's a recent article explaining Skinner's regard for biology's role in behavior in The Behavior Analyst. Even more recently is a good 2007 article outlining current research about the relationship between biology and the three-term contingency [1]. The simple fact of the matter is that neurology is the hardware of organic "learning machines." To deny that stimuli and responses are transmitted along neurons and modified at the synaptic level would be ridiculous. Consider how over a hundred years ago Darwin had an entirely environmental account of evolution (natural selection). He had no biological mechanism to explain how variation occurred and how traits were passed on. He only knew that it happened, and he had strong evidence for it. Then with the discovery of DNA, Darwin's model of evolutionary change was justified because DNA behaves in exactly the way that Darwin's model predicted. Skinner's behavior analysis is much the same way. His model of learning is being justified by biological findings, and biology will ultimately be what redeems behavior analysis as a "hard" science separate from psychology. Furthermore, it's very important to note that Skinner is not the be-all end-all of behavior analysis. To treat it as such is to group it with all the other dead models populating psychology texts. A living and breathing science has the ability to expand and further clarify its subject of study. Lunar Spectrum | Talk 19:44, 26 May 2007 (UTC)
There's been an interesting addition to the article in the form of a "further reading" section. It's an article that purports that cognition is a mediating influence on behavior under classical and operant conditioning procedures. Of course, the idea that cognition plays a role as a mediator of behavior goes against the radical behaviorist position that cognition is itself a form of behavior subject to the same laws as overt behavior, no more and no less. The authors go on to build a case (one that I don't consider convincing) using past research to support their assertion. For example, they claim that if behavior is affected by consequences, then it must be "goal-oriented" and that "expectancies" must be involved and that, therefore, this means cognition governs behavior. This is a clear example of invoking unseen causal agents. They also cite research on rat maze running whose results they interpret to mean that rats form "cognitive maps" instead of learned responses, such as the case in which a rat has learned to run a maze, then during a new trial when a path is blocked the rat uses a parallel path as an alternative, even though the rat has not learned to use that alternative parallel path. I think this does not exclude, to my satisfaction, the influence of the rat's past acquired history of navigational repertoires upon the behavior seen in the experiment. Another area the authors cite is a 1974 review by William Brewer which investigates the effects of informed consequences upon human behavior. These are cases in which neutral stimuli have acquired reinforcing or punishing functions upon a subject's behavior without any conditioning taking place. All of the Brewer (1974) examples, as far as I can see, can easily be explained by stimulus equivalence in which new stimulus functions can emerge through membership in an equivalence relation, which is a thoroughly behavior analytic area of research. 
Understandably, Brewer (1974) couldn't have known about stimulus equivalence as a behavioral explanation for the results he was seeing... but I think more should be expected of the present authors. It goes on to cite Rescorla (1988) which, for all intents and purposes, seems to be based upon a complete misrepresentation of the behavioral account of contingency. He claims that behaviorists view the degree of stimulus control exerted by a CS as determined by the number of CS>US pairings (which is not true of behaviorists) and goes on to state that he has "discovered" that the true relationship is the predictive value of the CS (which is what behaviorists already consider to be true). He states that behaviorists are therefore wrong (according to his understanding of behaviorism) and that there must therefore be some kind of "goal-directed" cognition going on to account for it.
I could really go on and on... and maybe I'm making a mountain out of a mole hill, but I think this reference really doesn't belong here. I guess I could remove it without much fuss, but considering the level of misunderstanding of behavior analysis among cognitivists/constructivists I could easily see how simply removing it might elicit the reaction that I was removing fair criticism of behavior analysis. Maybe if we left the reference in the article, it could instead be a blue-print for elements of conditioning that could be further addressed in the body of the article itself? At the least it would be nice for others to review the reference themselves before having it removed. What do you think? Lunar Spectrum | Talk 04:13, 1 June 2007 (UTC)
Hi - I kind of think parts of the article sound very defensive and somebody is getting rather uptight about the Skinner/Thorndike debate. I think credit is less important than making sure the point of the article is clear and explains what the current understanding of operant conditioning IS rather than making the article all messy about who made up what and so on. If I want to know who came up with what I don't think I'd come to Wikipedia to get that info. — Preceding unsigned comment added by 203.173.169.91 ( talk) 21:17, 25 June 2007 (UTC)
I have never heard of this term until now, and its only existence seems to be in Wiki-world forms. I would not be in favor of a link to it on the Operant Conditioning page. ( Mcole13 ( talk) 17:45, 14 July 2008 (UTC))
I am interested in cases in which this has been used on humans for psychological treatment. Despite the effectiveness on pigeons and other less intelligent mammals I find it difficult to imagine with accuracy how operant conditioning could be used for aversion therapy. Links would be ideal. 96.49.141.252 ( talk) 06:06, 3 July 2009 (UTC)
The introduction is too technical and focuses mostly on describing what Operant conditioning is *not*, i.e. it is not classical conditioning, instead of on what it *is*. Could someone with the knowledge in the field write a better introduction and push the details clarifying the distinction with classical conditioning to the body of the article? -- NavarroJ ( talk) 18:09, 3 June 2010 (UTC)
A colleague made a valuable correction, but followed it quickly with a mistaken edit that I've reverted.
The language they replaced -- "(commonly seen as pleasant)" and "(commonly seen as unpleasant)" -- is deficient, but the reverted replacement was much worse, for breaking the desirable parallelism, for equating human attitudes to conditioning phenomena, and for ignoring the low correlation of unpleasantness to negative reinforcement (which parallels the low correlation of pleasantness to positive reinforcement). The problems this presents include:
The language I've restored can be improved upon, starting with taking this into account:
(In an article on the psychology of conditioning and learning, the whole notion of relevance of any (un-)pleasantness other than that in humans corrupts the unassailable status of experimental psychology as science, and drags in irrelevant arguments like whether there is such a thing as "what it is like to be a bat".) And the revision I reverted was a step away from, rather than toward, what we need.
-- Jerzy•t 05:51, 30 July 2010 (UTC)
I've made a minor edit to try to capture at least some of these concerns about the previous wording. Please edit, rather than revert to the previous, if unhappy. At the very least the parentheses would need to be removed, because they significantly altered the intended meaning of the sentences. Personally, I think it is important that the layperson is able to understand these basic concepts of reward and punishment, even if at the expense of some philosophical precision. Excuse me if I'm not in line with the Wikipedia vision in this view - I make relatively few contributions. But if I notice a section of an article is essentially unreadable I usually try to make some minimum corrections to fix that. Interlope ( talk) 00:32, 2 August 2010 (UTC)
The section on immediacy doesn't have a citation, but here's a potential one. I don't know how to put it in myself but here's the link: http://www.sciencedirect.com/science/article/pii/S037663570400169X JDWLB ( talk) 13:06, 2 June 2011 (UTC)
The article currently states that "instrumental learning was first extensively studied by Jerzy Konorski and next by Edward L. Thorndike." But Thorndike published his work on the law of effect in 1905, and Konorski wasn't even born until 1903. Something is amiss... — Preceding unsigned comment added by 24.42.228.249 ( talk) 18:13, 2 March 2014 (UTC)
At the top of the page, a figure of a tree structure of conditioning is presented. I identified a typing error: "Appetative" should be "Appetitive". See Webster's Collegiate Dictionary. Aartsj ( talk) 07:55, 27 July 2014 (UTC)
This article seems all very theoretical and mostly focused on human behavior. What I was trying to look up was a mention of "operant conditioning" as one of the things zoo interns/volunteers are trained in. Google eventually told me that the conditioning is applied to the animals instead of the humans, to teach them to cooperate with routine health care, transfers between enclosures, and the like. Perhaps a new section in the article is called for? 64.93.124.227 ( talk) 03:18, 12 March 2015 (UTC)
I like the idea as well. It's not difficult to see that animals all over the world may have interactions with human beings' training techniques and methods. Humans are a more advanced 'animal' since they have cognitive and analytical abilities when they confront issues. However, operant conditioning applies to both animals and human beings. So I think having an animal training section would definitely be a plus! — Preceding unsigned comment added by Huskyqqq ( talk • contribs) 06:34, 29 November 2017 (UTC)
Yeah! I was wondering this myself and curious how it could be developed. The dynamic between the animal and a human is so interesting especially thinking about it in a zoo way. I'm sure a lot of things happen with animals at zoo's that people don't even realize are part of operant conditioning. Simply being able to feed an animal can be valuable to figuring out more about this theory pertaining to other animals. Justin.edwards ( talk) 02:18, 11 December 2017 (UTC)
I added more information about the operant conditioning chamber to the part on Skinner. I included some of the initial tasks such as task 1, which is isolating an individual piece of behavior to see how it could be changed. I also mentioned how the variable ratio schedule plays into human gambling problems. I found this important to add while also very interesting. I am curious to learn more about the effects the variable ratio schedule in terms of human gambling. Klaska 24 ( talk) 05:54, 31 October 2017 (UTC)Kelly (klaska_24)
I find this to be very interesting as well and think you have a great point. I wonder if the ratio is completely consistent with slot machines. Something even deeper I think would be fun to research is whether humans would still want to gamble on slot machines if the odds didn't follow a variable ratio schedule, and whether the conditioning of even one win could get them to come back to the casino. Justin.edwards ( talk) 03:08, 11 December 2017 (UTC)
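The schedule being discussed above can be illustrated with a toy simulation. This is a minimal sketch, not the article's content: the function name `variable_ratio_session` and the probabilistic payoff model (each response reinforced with probability 1/mean ratio) are my own illustrative assumptions.

```python
import random

def variable_ratio_session(mean_ratio, n_responses, seed=0):
    """Toy simulation of a variable-ratio (VR) schedule: each response
    is reinforced with probability 1/mean_ratio, so the number of
    responses between reinforcers is unpredictable -- the property
    this thread links to persistent slot-machine play."""
    rng = random.Random(seed)
    reinforcers = 0
    for _ in range(n_responses):
        if rng.random() < 1.0 / mean_ratio:
            reinforcers += 1
    return reinforcers

# On a VR-10 schedule, roughly 1 in 10 responses pays off on average,
# but any single response might be the winning one.
print(variable_ratio_session(mean_ratio=10, n_responses=1000))
```

A fixed-ratio version (reinforce exactly every Nth response) would make each payoff predictable, which is the contrast the question about slot machines hinges on.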
As people give more intermittent reinforcement of reward or punishment, the pace at which people change their emotional bonds can differ dramatically. So really look out for traumatic bonding. I feel like this is a sensitive topic somehow, but I want to come up with more examples based on it since the article says little about it. Traumatic bonding could be applied to a lot of cases. — Preceding unsigned comment added by Huskyqqq ( talk • contribs) 04:48, 5 December 2017 (UTC)
I added some things about the law of effect as it pertains to operant conditioning. I thought it could use an example without having to leave the page. I think it also ties into other sections so it fits well and keeps the wiki page smooth. Justin.edwards ( talk) 03:04, 11 December 2017 (UTC)
Justin, Good idea. I am glad that you added an example so that users wouldn't have to be redirected to another page. Klaska 24 ( talk) 15:24, 12 December 2017 (UTC)klaska_24
I added a paragraph to the 'Praise' section discussing studies done on the efficacy of Cognitive-Behavioral therapy and Operant-Behavioral therapy. It was touched on earlier in the section but I thought it would be good to elaborate on it. Klaska 24 ( talk) 15:21, 12 December 2017 (UTC)Klaska_24
According to the introduction, operant conditioning is the same as contingency management, but there is a separate article under that heading and presumably duplication of material. I suggest the two either be combined or the differences be clarified for the reader. I know nothing about the material so can't work on it, but a page I am editing refers to both, which brought me here to understand the difference. It's just confused me more, unfortunately. Hopefully editors here will be able to sort it out. Dakinijones ( talk) 23:09, 15 January 2020 (UTC)
"In operant conditioning, stimuli present when a behavior that is rewarded or punished controls that behavior."
I cannot parse this sentence. What is the subject of 'controls'? Stimuli present? Then it should be 'control' not 'controls'. Why is there a 'that' after 'behavior'? I can't even tell what the sentence is trying to say. — Preceding unsigned comment added by 86.139.192.79 ( talk) 07:20, 27 August 2021 (UTC)
You say the sentence is grammatical but it is not.
'Stimuli' is a plural noun, and therefore requires the 3rd person plural 'control' as in 'they control', as opposed to the 3rd person singular 'controls' as in 'he controls'.
Furthermore, the 'that' should not be there. 'That' sets up 'is rewarded or punished' as a subordinate clause, with the result that 'behavior' should then be the subject of 'controls', which it is not.
The correct grammatical sentence would be:
"In operant conditioning, stimuli present when a behavior is rewarded or punished control that behavior." — Preceding unsigned comment added by 86.139.192.79 ( talk) 14:06, 27 August 2021 (UTC)
The tree at the top of the page does not fit appropriately on mobile. This includes browser and app. Shaunlilan ( talk) 05:43, 14 October 2021 (UTC)
This article was the subject of a Wiki Education Foundation-supported course assignment, between 14 January 2019 and 8 May 2019. Further details are available
on the course page. Student editor(s):
JasmineHutson21.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT ( talk) 05:44, 17 January 2022 (UTC)
Why no mention of ethical alternatives? Of connections to Nazi practices? Of use by kiddy groomers? Of coercion and battery? Of dehumanisation? Of autistic monolithic opposition to operant conditioning and ABA as unethical conversion 'therapy' delivered by an outright quack cult?
Oh, the cult runs this page?
Sorry, I will go now. 2407:7000:9C65:5E00:EC95:E3EA:83EF:1F8F ( talk) 07:34, 26 April 2024 (UTC)
Santaduck 03:13, 20 January 2006 (UTC)
At first it said the person worked w/ cats but then it said rats!! Which one is it? — Preceding unsigned comment added by Lilsaalex ( talk • contribs) 15:41, 8 February 2006 (UTC)
I'm curious as to what you guys think was the most effective. And if anyone thinks that which animal you use really matters. As humans, we can say we are the dominant species and it trickles down, but when it gets to the lower brain capacity of different animals, do you think it played a big role? Justin.edwards ( talk) 02:12, 11 December 2017 (UTC)
The consequences link doesn't really make sense. 128.213.28.129 20:43, 15 February 2006 (UTC)
New section includes this paragraph:
So I don't understand what the point is--that allowing the dog to indulge prey drive when they do something correct is NOT a positive reinforcement? It seems to me like it is. Dog does the weave poles really fast, they get the tug toy. Dog doesn't go as fast, dog doesn't get to play tug. How is that not a positive reinforcer? Elf | Talk 00:55, 24 February 2006 (UTC)
The section on prey drive is inconsistent with the rest of the article. Not everyone agrees that tracking or working dogs have to be rewarded every time; this is more the author's bias than fact, especially without seeing any citations. It is stated that prey drive is an example of an exception to operant conditioning. This is conjecture, as again no sources are cited. Giving the toy or throwing the ball is an addition of something the animal wants - therefore it is positive reinforcement. Even though this is not a food reward, it is a conditioned reinforcer. If the animal does something correctly, it is given this reinforcement. We really don't care why the animal wants the reward. The fact that it works for the reward makes it operant conditioning. — Preceding unsigned comment added by 148.168.40.4 ( talk) 19:06, 7 July 2006 (UTC)
OK, going by a suggestion on the new contributor's question page, I'm going to lay out what I think should be done with this section. The whole "drawbacks and limitations" section needs to be redone. Obviously, Behavior Analysis tends to draw a lot of ire and so the popular insistence for such a section, no matter how badly done, is very strong. However, the opening paragraph on the "drawbacks" section illustrates this problem nicely. A Nobel laureate is cited as stating that operant conditioning doesn't take into account "fixed" reflexes, yet in the very same paragraph we have an explanation (though incomplete) about how operant conditioning isn't supposed to deal with reflexes to begin with because the form of a reflex is, as mentioned, biologically fixed in form, whereas operant behavior is defined as behavior whose form is modifiable by consequences. This demonstrates something that BF Skinner himself noted, that a person's criticism of Behavior Analysis is inversely proportional to how much they actually understand it (a phenomenon that also holds true for other scientific models, like Evolution by Natural Selection). I intend to keep that criticism of the Nobel laureate in the article, but expand upon the paragraph to explain Skinner's rationale for not including reflexes as a form of operant behavior.
Also, the entire "prey drive" portion needs to be removed. In its place would be a listing of factors that alter the effectiveness of consequences, factors such as what I previously mentioned about "satiation." It could look like this:
I will wait approximately a week (maybe more) for further feedback about my intended alterations. Afterwards, I will see how much of what I have included above I will implement. Lunar Spectrum 05:17, 30 August 2006 (UTC)
Nearly a month has passed and there is no comment about my suggestion. I think I will simply add what I have outlined above in a new section and deal with the prey-drive section some other time. -- Lunar Spectrum | Talk 02:05, 26 September 2006 (UTC)
I am not sure if I should have done it or not; perhaps I ought to wait and think and reflect before I edit, but the section that mentioned prey drive was SO far outside of the article that I rewrote it so that it has something to do with the discussion of FAPS versus OC. Whoever wrote the one that I edited out doesn't understand OC or FAPS but would surely like to convince the rest of the world that using prey drive is a valid method of training. At best, it is sloppy terminology that doesn't really belong in any training program that is developed using operant conditioning as its model. So, if I went overboard in my edits, I do apologize; however, the first bit was really, really bad. I am conducting a workshop this weekend on operant conditioning, and I will go back and add references after the workshop is done; sorry, but I am swamped at the moment. Suenestnature ( talk) 05:24, 2 January 2009 (UTC)
I was reading the section in the article on Thorndike and his theories, and noticed that there was a passing reference to Skinner's research on reinforcement, no doubt to do with the "Skinner Box" experiment. Considering he was one of the greatest researchers in this area of psychology, could a section be added to explain the principles and methodology of the experiment? 58.169.141.5 23:46, 28 April 2006 (UTC) Nick
For what it's worth, note in passing that Karen Pryor: Don't Shoot the Dog! defines negative reinforcement and punishment differently. To Pryor, the main difference is timing. A negative reinforcement is something disagreeable that the subject can immediately stop by changing his behavior. A punishment is something that happens later that the subject cannot immediately stop by changing his behavior. If Auntie frowns when I put my feet on the coffee table, and stops frowning when I take them off, that is what Pryor calls a negative reinforcement. If I get a bad grade on my report card that reflects all the work I haven't done in class this year, that is what Pryor calls a punishment. Pryor notes that even though punishment is everyone's favorite method of untraining unwanted behavior, it rarely works because the subject usually has difficulty connecting the punishment with the behavior; often, the subject learns to evade punishment instead.
The behaviorist psychologist H. J. Eysenck talks in similar terms in his book Psychology Is About People, Chapter 3. He insists on talking about positive and negative reinforcement instead of reward and punishment, despite the clumsiness of his preferred terms, because with rewards and punishments the timing may make it difficult for the subject to connect the result with the behavior. — Preceding unsigned comment added by 4.232.102.216 ( talk) 20:28, 7 May 2006 (UTC)
For what it's worth, this is all too nitpicky. If you want to be pure, non-implicative animal behaviorists, P+, P-, R+, and R- are this simple:
If a child screams, a parent picks them up, and the child stops screaming, like it or not that is P+. If you expanded the time line and looked at recurring behavior, you might see increases in intensity and frequency, and then you'd understandably label it R+; however, purely analytically, you cannot read pain and reward into P and R just because we think rats like cheese or dislike tail shocks. We cannot know with 100% certainty the intentions of an animal and their perceptions. We can only observe what causes behavior to go up and down. Dogs and cats sometimes love to be pet; other times it's very punishing to them. R- is a big annoyance for me because people always use examples of physically aversive loud noises, ear pinches, etc., and while it's often the case, we are already analyzing the reinforcing agent through our own conditioned emotional responses. Let's look at a receptionist at a doctor's office. She puts out candy and people smile more in the office, so she puts candy out every day after that. Now was she reinforced by the increase of smiles (R+) or was she reinforced by the removal of frowns (R-)? It completely depends on the individual's temperament, and you would have to ask her: hey, what do you like more, no frowns or smiles? Animals can't speak to us on those terms, so we cannot assume what the likely reinforcer is. All punishments and reinforcements have this duality. Did the rat get reinforced by cheese because it likes cheese, or did the rat get reinforced by the loss of hunger? Does a child in timeout curb undesirable behavior because they lost the ability to play with friends? Or do they curb that behavior because they don't like the time-out room or stool? Typically we argue it's the loss of the ability to play that makes a timeout P-; however, the emotional response associated with the time-out room or stool may actually cause the child to respond more strongly to the instruments of the time out than to the loss of opportunity, making it P+.
There are many times with animals when non-physically aversive stimuli are punishing, and non-physically rewarding stimuli are reinforcing, because you cannot dismiss the effect of Pavlov in understanding an animal's conditioned emotional responses to stimuli. Some kids like time-outs, and some people enjoy cutting themselves, so cluttering up operant conditioning with words like "rewarding" and "aversive" is anecdotal and non-scientific, as well as making it more confusing to Billy and Susie. Yes, R- is often easily viewed as an escape, but that is not its definition and can cause confusion in the less plastic mind. PB- 11/7/10 11:11PST —Preceding unsigned comment added by 98.247.244.101 ( talk) 19:18, 7 November 2010 (UTC)
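The comment above argues that the four contingencies are defined purely by what is observed: whether a stimulus is added or removed, and whether the behavior subsequently increases or decreases, with no appeal to "pleasant" or "aversive". A minimal sketch of that purely observational classification (the function name and string labels are my own, not standard terminology):

```python
def classify_consequence(stimulus, behavior):
    """Classify a consequence strictly by observed effects.
    stimulus: 'added' or 'removed' (was something presented or taken away?)
    behavior: 'increases' or 'decreases' (what happened to its future rate?)"""
    if stimulus not in ("added", "removed"):
        raise ValueError("stimulus must be 'added' or 'removed'")
    if behavior not in ("increases", "decreases"):
        raise ValueError("behavior must be 'increases' or 'decreases'")
    # Reinforcement/punishment is read off the behavior change alone;
    # positive/negative is read off the stimulus change alone.
    kind = "reinforcement" if behavior == "increases" else "punishment"
    sign = "positive" if stimulus == "added" else "negative"
    return f"{sign} {kind}"

print(classify_consequence("added", "increases"))    # positive reinforcement
print(classify_consequence("removed", "increases"))  # negative reinforcement
print(classify_consequence("added", "decreases"))    # positive punishment
print(classify_consequence("removed", "decreases"))  # negative punishment
```

Note that nothing in the classification asks whether the organism "liked" the stimulus, which is exactly the commenter's point about the receptionist and the time-out examples.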
I'm not too sure what becomes ineffective when extinction occurs. I assume it's the reward (the pellet)... but then it seems like the behavior became extinct. Regardless, I'm confused and this paragraph ought to be clarified.
Extinction is a related term that occurs when a behavior (response) that had previously been reinforced is no longer effective. In the Skinner box experiment, this is the rat pushing the lever and being rewarded with a food pellet several times, and then pushing the lever again and never receiving a food pellet again. Eventually the rat would cease pushing the lever.
I would also explain in the intro that operant conditioning is not absolute - it doesn't ensure that the subject will always perform a task (as I gather using the prey drive does). That little factoid came out of the blue in that section. — Preceding unsigned comment added by 69.109.181.222 ( talk) 09:11, 27 July 2006 (UTC)
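The quoted extinction paragraph (rat presses lever, pellets stop, pressing fades) can be pictured with a toy model. This is only an illustrative sketch: the geometric decay of the response rate and the function name `extinction_curve` are my own assumptions, not a claim about real extinction dynamics.

```python
def extinction_curve(initial_rate, decay=0.8, trials=10):
    """Toy extinction model: once pellets stop, the response rate is
    assumed to fall geometrically on each unreinforced trial.
    (The geometric form is an illustrative assumption, not a law.)"""
    rates = []
    rate = initial_rate
    for _ in range(trials):
        rates.append(round(rate, 3))
        rate *= decay  # lever press, no pellet -> responding weakens
    return rates

curve = extinction_curve(initial_rate=30.0)  # e.g. 30 presses per minute
# The rate declines toward zero as pressing goes unreinforced,
# matching the paragraph's "eventually the rat would cease pushing the lever."
```

Here the thing that "goes ineffective" is the response: pressing no longer produces the pellet, so the previously reinforced behavior declines.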
In the section "Factors that alter the effectiveness of consequences" I included the mention of how certain factors are the result of biology. For example, I mentioned that the principles of Immediacy and Contingency are the result of the statistical probability of dopamine to modify the appropriate synapses. However, the necessity of an entire section devoted to the biological basis of operant procedures is becoming clear. I used the dopamine reference only to support the section about "Factors that alter the effectiveness of consequences," but already more biological references have been added to that section. They are good references and should be kept, but they should be moved to their own section because they do not contribute anything to the subject of the section they are currently in.
I think that the biological section should be the second section, placed right after the "Reinforcement, punishment, and extinction" section. It would be a good way to structure the article to first have exposition on reinforcement, punishment, and extinction procedures, then have a three-part section immediately following it to explain the neurophysiological effects of reinforcing stimulation, aversive stimulation, and extinction. An alternative to this might be to simply add such a discussion to each of the existing corresponding articles on reinforcement, punishment, and extinction. -- Lunar Spectrum | Talk 00:31, 29 September 2006 (UTC)
Useful info on both articles... Schedule of reinforcement should not be an article. Reinforcement probably shouldn't be either - both should redirect here. —The preceding unsigned comment was added by Thuglas ( talk • contribs) 05:21, 2 March 2007 (UTC).
I was kinda thinking that, so I put a second link to merge schedules of reinforcement into reinforcement. Perhaps a little thing on extrinsic and intrinsic reinforcement and secondary/primary reinforcement could be added. thuglas T| C 17:37, 2 March 2007 (UTC)
Yeah, I think that would work. Primary means food or something; secondary means money. The differences between extrinsic/intrinsic and primary are very little, but for some reason they remain separate in my mind.
I figure if no one complains in a week or so we should go ahead and be WP:bold. I've posted the link on WP psych. I don't think anyone would disagree with this idea. — Preceding unsigned comment added by Thuglas ( talk • contribs) 18:02, 3 March 2007 (UTC)
I think we were on the same page here, but to clarify: I know that secondary/primary reinforcers are not synonymous with intrinsic or extrinsic. I think extrinsic, intrinsic, secondary, and primary reinforcement would all fit into the article. (I haven't looked at it in a while; I just don't like being misunderstood.) thuglas T| C 15:13, 7 August 2007 (UTC)
Looking at the articles in question, I think that the Reinforcement article should not be merged into Operant Conditioning. The Reinforcement article has a good level of detail that makes itself stand as an article on its own. Adding to that the proposal to merge Schedules of Reinforcement into Reinforcement, and the amount of redundant content would bog down the entire article. I think that elements of the Schedule of Reinforcement article can be successfully merged into Reinforcement. But Operant Conditioning already does enough of an overview of reinforcement not to warrant Reinforcement being merged into it. That would detract from the broader focus of the Operant Conditioning article, which should be more about the modification of behavior (operant procedures) rather than the details about the tool used to modify behavior (reinforcement). Lunar Spectrum | Talk 01:11, 14 March 2007 (UTC)
Having a lot of material on reinforcement in the operant conditioning article makes it too large. Reinforcement deserves a separate article from operant conditioning. Rather than a merge from reinforcement, I suggest that appropriate sections be merged into reinforcement. The two articles, Schedule of reinforcement and Reinforcement, can be merged together. Kpmiyapuram 12:17, 10 April 2007 (UTC)
This section currently appears to have material that fits for "biological correlates of classical conditioning" and not those of operant conditioning. Kpmiyapuram 13:51, 11 April 2007 (UTC)
There is some material on Extinction (psychology) in a separate article, but I see that the current article on operant conditioning discusses it at more length. Perhaps the information could be reorganized or merged. Kpmiyapuram 14:18, 24 April 2007 (UTC)
I don't think it's accurate to relate Thorndike to Operant Conditioning. Skinner's operant was "discovered" by him alone. Thorndike used different terms and explanatory systems. This is very important. Lots of people examined learning in humans and animals before Skinner. None of that was "operant conditioning" because it relied on mediating structures ('expectations', 'drives', etc). The explanatory system is as important as the actual data (perhaps even more so).
Operants were also quantified in the operant chamber - Skinner's invention - which Thorndike did not use.
Moreover it implies that Skinner's position is just another learning theory, and it is not. This is an attempt to rewrite the dead theories of Thorndike as "operant" theories that have become popular or scientifically validated. Thorndike was important in his little way. Put his theories on his own page, or change the name of the page to "instrumental learning". Operant = Skinner != Thorndike.
(-Florkle!)
—The preceding unsigned comment was added by Florkle ( talk • contribs) 07:25, 16 May 2007 (UTC).
I have added a refutation of the Thorndike extension article and cited Chiesa. This whole article is problematic in its treatment of reinforcement theory, which is not very "clean" in its presentation.
Moreover the digression into the neurochemistry of reinforcement is something that Skinner has rejected since 1938 when he dismissed physiological explanations as appealing to a "conceptual nervous system (CNS)" and later.
-- Florkle 06:23, 17 May 2007 (UTC)
Why are the sections "verbal behavior" and "four term contingency" at the beginning of the article? The latter seems unneeded and the former seems like it should go much later, if at all. And why do we have this paragraph arguing that Skinner's work wasn't based on Thorndike's? Is this information relevant to discussing what operant conditioning is? If anything, I think that should be moved to a separate history section. I'm also surprised reinforcement learning isn't linked in this article, but I'll toss that into the "see also" section now... digfarenough ( talk) 13:53, 17 May 2007 (UTC)
I also think the new additions disrupt the flow of the article. They certainly might have their place somewhere in it, but right now it seems a bit random. And it also seems that the biological section was moved from 3rd section to, apparently, the very last??? To my thinking, the biology section should be near the beginning since, despite being the most heavily disparaged area of psychology, operant conditioning is more solidly grounded in biology than anything else in the field. So I think having that biological basis close to the top is important for the credibility of the subject matter. I think an appropriate structure to the article would be 1. history 2. basics 3. biological underpinnings 4. plus various other special topics. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)
Additionally, I think a special section on verbal behavior would have to clearly explain how an understanding of verbal operants extends from operant conditioning, which it presently does not accomplish. It can be done (I'd have to look over some of my old notes and google for some sources), but as an advanced topic it should go somewhere towards the end. Theoretical extensions of operant conditioning, like Skinner's Verbal Behavior, should not greatly detract from the focus of this particular article: namely, operant conditioning procedures, which are factual experimental findings. And it's certainly not a "theory" of operant conditioning... no more than a physicist would call the laws of kinematics a "theory" of kinematics. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)
And having checked on the article for Verbal Behavior, I'm now concerned about NPOV issues regarding the user who made the recent section changes in the Operant conditioning article. In the talk page for Verbal Behavior he recently states that he has "nuked all references to Chomsky's" review. Now, I may think that Chomsky's review is completely flawed. But for historical reasons, his review is appropriate subject matter for that article. It would be like having a biography on Abraham Lincoln without mentioning John Wilkes Booth. Anyway, I'm restoring the biological section to its original place in the article and moving some other stuff down to the bottom until it can be worked out. Lunar Spectrum | Talk 00:18, 18 May 2007 (UTC)
It's a complete myth that Skinner rejected biology's role in behavior. It's true that Skinner was opposed to giving explanatory status to unknown mediating constructs. For example, Chomsky coming along and saying "environment can't explain verbal behavior, therefore I will invent an imaginary Language Acquisition Device and claim it exists somewhere in the brain." That is the kind of hypothetical mediationism that Skinner was against, when people pull mediating constructs out of nowhere. There's a recent article explaining Skinner's regard for biology's role in behavior in The Behavior Analyst. Even more recent is a good 2007 article outlining current research about the relationship between biology and the three-term contingency [1]. The simple fact of the matter is that neurology is the hardware of organic "learning machines." To deny that stimuli and responses are transmitted along neurons and modified at the synaptic level would be ridiculous. Consider how over a hundred years ago Darwin had an entirely environmental account of evolution (natural selection). He had no biological mechanism to explain how variation occurred and how traits were passed on. He only knew that it happened, and he had strong evidence for it. Then with the discovery of DNA, Darwin's model of evolutionary change was justified because DNA behaves in exactly the way that Darwin's model predicted. Skinner's behavior analysis is much the same way. His model of learning is being justified by biological findings, and biology will ultimately be what redeems behavior analysis as a "hard" science separate from psychology. Furthermore, it's very important to note that Skinner is not the be-all end-all of behavior analysis. To treat it as such is to group it with all the other dead models populating psychology texts. A living and breathing science has the ability to expand and further clarify its subject of study. Lunar Spectrum | Talk 19:44, 26 May 2007 (UTC)
There's been an interesting addition to the article in the form of a "further reading" section. It's an article that purports that cognition is a mediating influence on behavior under classical and operant conditioning procedures. Of course, the idea that cognition plays a role as a mediator of behavior goes against the radical behaviorist position that cognition is itself a form of behavior subject to the same laws as overt behavior, no more and no less. The authors go on to build a case (one that I don't consider convincing) using past research to support their assertion. For example, they claim that if behavior is affected by consequences, then it must be "goal-oriented" and that "expectancies" must be involved and that, therefore, this means cognition governs behavior. This is a clear example of invoking unseen causal agents. They also cite research on rat maze running whose results they interpret to mean that rats form "cognitive maps" instead of learned responses, such as the case in which a rat has learned to run a maze, then during a new trial when a path is blocked the rat uses a parallel path as an alternative, even though the rat has not learned to use that alternative parallel path. I think this does not exclude, to my satisfaction, the influence of the rat's past acquired history of navigational repertoires upon the behavior seen in the experiment. Another area the authors cite is a 1974 review by William Brewer which investigates the effects of informed consequences upon human behavior. These are cases in which neutral stimuli have acquired reinforcing or punishing functions upon a subject's behavior without any conditioning taking place. All of the Brewer (1974) examples, as far as I can see, can easily be explained by stimulus equivalence in which new stimulus functions can emerge through membership in an equivalence relation, which is a thoroughly behavior analytic area of research. 
Understandably, Brewer (1974) couldn't have known about stimulus equivalence as a behavioral explanation for the results he was seeing... but I think more should be expected of the present authors. It goes on to cite Rescorla (1988) which, for all intents and purposes, seems to be based upon a complete misrepresentation of the behavioral account of contingency. He claims that behaviorists view the degree of stimulus control exerted by a CS as determined by the number of CS>US pairings (which is not true of behaviorists) and goes on to state that he has "discovered" that the true relationship is the predictive value of the CS (which is what behaviorists already consider to be true). He states that behaviorists are therefore wrong (according to his understanding of behaviorism) and that there must therefore be some kind of "goal-directed" cognition going on to account for it.
I could really go on and on... and maybe I'm making a mountain out of a mole hill, but I think this reference really doesn't belong here. I guess I could remove it without much fuss, but considering the level of misunderstanding of behavior analysis among cognitivists/constructivists I could easily see how simply removing it might elicit the reaction that I was removing fair criticism of behavior analysis. Maybe if we left the reference in the article, it could instead be a blue-print for elements of conditioning that could be further addressed in the body of the article itself? At the least it would be nice for others to review the reference themselves before having it removed. What do you think? Lunar Spectrum | Talk 04:13, 1 June 2007 (UTC)
Hi - I kind of think parts of the article sound very defensive, and that somebody is getting rather uptight about the Skinner/Thorndike debate. I think credit is less important than making sure the point of the article is clear and explains what the current understanding of operant conditioning IS, rather than making the article all messy about who made up what and so on. If I wanted to know who came up with what, I don't think I'd come to Wikipedia to get that info. — Preceding unsigned comment added by 203.173.169.91 ( talk) 21:17, 25 June 2007 (UTC)
I have never heard of this term until now, and its only existence seems to be in Wiki-world forms. I would not be in favor of a link to it on the Operant Conditioning page. ( Mcole13 ( talk) 17:45, 14 July 2008 (UTC))
I am interested in cases in which this has been used on humans for psychological treatment. Despite its effectiveness on pigeons and other animals, I find it difficult to imagine precisely how operant conditioning could be used for aversion therapy. Links would be ideal. 96.49.141.252 ( talk) 06:06, 3 July 2009 (UTC)
The introduction is too technical and focuses mostly on describing what Operant conditioning is *not*, i.e. it is not classical conditioning, instead of on what it *is*. Could someone with the knowledge in the field write a better introduction and push the details clarifying the distinction with classical conditioning to the body of the article? -- NavarroJ ( talk) 18:09, 3 June 2010 (UTC)
A colleague made a valuable correction, but followed it quickly with a mistaken edit that I've reverted.
The language they replaced -- "(commonly seen as pleasant)" and "(commonly seen as unpleasant)" -- is deficient, but the reverted replacement was much worse: it broke the desirable parallelism, equated human attitudes to conditioning phenomena, and ignored the low correlation of unpleasantness to negative reinforcement (which parallels the low correlation of pleasantness to positive reinforcement).
The language I've restored can be improved upon, starting with taking this into account:
(In an article on the psychology of conditioning and learning, the whole notion of the relevance of any (un-)pleasantness other than that in humans corrupts the unassailable status of experimental psychology as science, and drags in irrelevant arguments like whether there is such a thing as "
what it is like to be a bat".) And the revision I reverted was a step away from, rather than toward, what we need.
-- Jerzy•t 05:51, 30 July 2010 (UTC)
I've made a minor edit to try to capture at least some of these concerns about the previous wording. Please edit, rather than revert to the previous, if unhappy: at the very least, the parentheses would need to be removed because they significantly altered the intended meaning of the sentences. Personally, I think it is important that the layperson is able to understand these basic concepts of reward and punishment, even if at the expense of some philosophical precision. Excuse me if I'm not in line with the Wikipedia vision in this view - I make relatively few contributions. But if I notice a section of an article is essentially unreadable, I usually try to make some minimum corrections to fix that. Interlope ( talk) 00:32, 2 August 2010 (UTC)
The section on immediacy doesn't have a citation, but here's a potential one. I don't know how to put it in myself but here's the link: http://www.sciencedirect.com/science/article/pii/S037663570400169X JDWLB ( talk) 13:06, 2 June 2011 (UTC)
The article currently states that "instrumental learning was first extensively studied by Jerzy Konorski and next by Edward L. Thorndike." But Thorndike published his work on the law of effect in 1905, and Konorski wasn't even born until 1903. Something is amiss... — Preceding unsigned comment added by 24.42.228.249 ( talk) 18:13, 2 March 2014 (UTC)
At the top of the page, a figure of a tree structure of conditioning is presented. I identified a typing error: "Appetative" should be "Appetitive". See Webster's Collegiate Dictionary. Aartsj ( talk) 07:55, 27 July 2014 (UTC)
This article seems all very theoretical and mostly focused on human behavior. What I was trying to look up was a mention of "operant conditioning" as one of the things zoo interns/volunteers are trained in. Google eventually told me that the conditioning is applied to the animals instead of the humans, to teach them to cooperate with routine health care, transfers between enclosures, and the like. Perhaps a new section in the article is called for? 64.93.124.227 ( talk) 03:18, 12 March 2015 (UTC)
I like the idea as well. It's easy to see that animals everywhere may interact with human training techniques and methods; humans are just more advanced animals, with cognitive and analytical abilities to bring to bear when confronting problems. Operant conditioning applies to both animals and human beings, so I think adding an animal training section would definitely be a plus! — Preceding unsigned comment added by Huskyqqq ( talk • contribs) 06:34, 29 November 2017 (UTC)
Yeah! I was wondering this myself and am curious how it could be developed. The dynamic between an animal and a human is so interesting, especially in a zoo setting. I'm sure a lot of things happen with animals at zoos that people don't even realize are part of operant conditioning. Simply being able to feed an animal can be valuable for figuring out more about how this theory pertains to other animals. Justin.edwards ( talk) 02:18, 11 December 2017 (UTC)
I added more information about the operant conditioning chamber to the part on Skinner. I included some of the initial tasks, such as task 1: isolating an individual piece of behavior to see how it could be changed. I also mentioned how the variable ratio schedule plays into human gambling problems. I found this important to add, as well as very interesting. I am curious to learn more about the effects of the variable ratio schedule on human gambling. Klaska 24 ( talk) 05:54, 31 October 2017 (UTC)Kelly (klaska_24)
I find this to be very interesting as well and think you have a great point. I wonder if the ratio is completely consistent across slot machines. Something even deeper that I think would be fun to research is whether humans would still want to gamble on slot machines if the odds didn't follow a variable ratio schedule, and whether the conditioning of even one win could get them to come back to the casino. Justin.edwards ( talk) 03:08, 11 December 2017 (UTC)
As people receive more intermittent reinforcement of reward or punishment, the pace at which they change their emotional bonds can differ dramatically, so really look out for traumatic bonding. I feel like this is a somewhat sensitive topic, but I want to come up with more examples based on it, since there isn't much here. Traumatic bonding could be applied to a lot of cases. — Preceding unsigned comment added by Huskyqqq ( talk • contribs) 04:48, 5 December 2017 (UTC)
I added some things about the law of effect as it pertains to operant conditioning. I thought it could use an example without the reader having to leave the page. I think it also ties into other sections, so it fits well and keeps the wiki page smooth. Justin.edwards ( talk) 03:04, 11 December 2017 (UTC)
Justin, Good idea. I am glad that you added an example so that users wouldn't have to be redirected to another page. Klaska 24 ( talk) 15:24, 12 December 2017 (UTC)klaska_24
I added a paragraph to the 'Praise' section discussing studies done on the efficacy of Cognitive-Behavioral therapy and Operant-Behavioral therapy. It was touched on earlier in the section but I thought it would be good to elaborate on it. Klaska 24 ( talk) 15:21, 12 December 2017 (UTC)Klaska_24
According to the introduction, operant conditioning is the same as contingency management, but there is a separate article under that heading and presumably duplication of material. I suggest the two either be combined or the differences be clarified for the reader. I know nothing about the material so can't work on it, but a page I am editing refers to both, which brought me here to understand the difference. It's just confused me more, unfortunately. Hopefully editors here will be able to sort it out. Dakinijones ( talk) 23:09, 15 January 2020 (UTC)
"In operant conditioning, stimuli present when a behavior that is rewarded or punished controls that behavior."
I cannot parse this sentence. What is the subject of 'controls'? Stimuli present? Then it should be 'control' not 'controls'. Why is there a 'that' after 'behavior'? I can't even tell what the sentence is trying to say. — Preceding unsigned comment added by 86.139.192.79 ( talk) 07:20, 27 August 2021 (UTC)
You say the sentence is grammatical but it is not.
'Stimuli' is a plural noun, and therefore requires the 3rd person plural 'control' as in 'they control', as opposed to the 3rd person singular 'controls' as in 'he controls'.
Furthermore, the 'that' should not be there. 'That' sets up 'is rewarded or punished' as a subordinate clause, with the result that 'behavior' should then be the subject of 'controls', which it is not.
The correct grammatical sentence would be:
"In operant conditioning, stimuli present when a behavior is rewarded or punished control that behavior." — Preceding unsigned comment added by 86.139.192.79 ( talk) 14:06, 27 August 2021 (UTC)
The tree at the top of the page does not fit appropriately on mobile. This includes browser and app. Shaunlilan ( talk) 05:43, 14 October 2021 (UTC)
This article was the subject of a Wiki Education Foundation-supported course assignment, between 14 January 2019 and 8 May 2019. Further details are available on the course page. Student editor(s): JasmineHutson21.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT ( talk) 05:44, 17 January 2022 (UTC)
Why no mention of ethical alternatives? Of connections to Nazi practices? Of use by kiddy groomers? Of coercion and battery? Of dehumanisation? Of autistic monolithic opposition to operant conditioning and ABA as unethical conversion 'therapy' delivered by an outright quack cult?
Oh, the cult runs this page?
Sorry, I will go now. 2407:7000:9C65:5E00:EC95:E3EA:83EF:1F8F ( talk) 07:34, 26 April 2024 (UTC)