This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 | Archive 2 | Archive 3 | Archive 4 | Archive 5 |
I gave the AI a shot at looking for sources, searching in a way similar to Google, to get an idea of how useful it is for providing sources for citation needed tags. What I noticed is that the AI-generated answers grabbed information from Wikipedia itself, which is bad; that's why you have to be careful when using Perplexity AI to find sources for Wikipedia articles. 2001:448A:3046:34F2:B825:A231:DC69:F03F ( talk) 03:05, 29 January 2023 (UTC)
— Preceding unsigned comment added by 2001:448A:3046:34F2:B825:A231:DC69:F03F ( talk) 03:27, 29 January 2023 (UTC)
@ Rsjaffe and JPxG: I believe that these inaccuracies can also apply to Bing's ChatGPT integration, even though it hasn't been released yet. Also, some sources in Perplexity AI results are unreliable or even listed on the perennial sources list, as I said before in the ANI thread. 2001:448A:3046:34F2:B825:A231:DC69:F03F ( talk) 04:06, 29 January 2023 (UTC)
The historic 'Powder House' was built in 1818 as a state gunpowder store and is considered to have historical significance, with a citation to a WordPress blog talking about a different location entirely (Powder House Island was constructed by a private company in 1881). jp× g 08:40, 29 January 2023 (UTC)
Below is my second attempt at crafting more accurate answers by editing sources. Note that not all results include an "Add Sources" button on this conversational search engine. Results before editing are shown above the results after editing.
Prompt: " an algorithm to simulate cling and skids in billiards"
Result (before editing): " An algorithm to simulate cling and skids in billiards requires coordination of equations and data access for each simulated component[1][2]. Cling, skid, or kick is a "bad hit" resulting from an excessive amount of throw[3]. This has nothing to do with pool tables, 16 balls, 2 cues, 1 chalk and 1 triangle[4], or slang words for "toilet billiards"[5]."
Sources:
Result (after editing): "There is only one search result for this question, which is a definition of the term "cling" in billiards[1]. No algorithm was provided."
Sources (after editing):
WP:DENY
The following discussion has been closed. Please do not modify it.
@ Rsjaffe, The Transhumanist, and JPxG: Any thoughts on the source-editing feature on Perplexity AI? Currently, adding sources is limited, since it doesn't let you type in any URL you want when adding "sources". One could argue this makes it a double-edged sword (the ability to craft even more accurate answers at the cost of adding room for misuse), right? 2001:448A:3046:59C8:9074:C5DB:F266:4DEE ( talk) 02:53, 26 February 2023 (UTC)
I had a brief simple conversation with ChatGPT that I thought would be interesting to see. My prompt has a bullet and the reply by ChatGPT is indented.
ChatGPT goes ahead and writes an article even though it warned me in other questions that it is not usable for Wikipedia
Nematodes, also known as roundworms, are a group of parasitic and free-living worms that are among the most abundant multicellular organisms on Earth. They are found in nearly every habitat, including soil, freshwater, and marine environments, as well as in the tissues of plants and animals.

Nematodes are elongated, cylindrical worms that typically range from 0.5 to 2 millimeters in length, although some species can grow to be several meters long. They have a simple digestive system, nervous system, and reproductive system, and move using a characteristic undulating motion.

Nematodes play important roles in various ecological and agricultural systems. Some species are beneficial, serving as decomposers of organic matter and as predators of other soil-dwelling organisms, while others are harmful pests that can cause damage to crops and other plants. For example, the root-knot nematode is a major pest of crops such as cotton, tomatoes, and peanuts, while the soybean cyst nematode is a destructive parasite of soybean crops.

In medicine, nematodes are important parasites of humans and animals, causing diseases such as elephantiasis and river blindness. They can also be used as experimental organisms in biological research, as they have simple anatomy and physiology, and can be easily cultured and manipulated in the laboratory.

References:
M. W. Adams, “Nematoda,” in The Biology of Nematodes, edited by D. L. Lee et al. (Taylor & Francis, 1998), pp. 3–21.
J. R. Grewal, “Nematode Parasites of Crops,” Annual Review of Phytopathology, vol. 48, no. 1 (2010), pp. 365–390.
J. B. Bloomquist and J. R. Grewal, “Entomopathogenic Nematodes,” Annual Review of Entomology, vol. 55 (2010), pp. 271–292.
— rsjaffe 🗣️ 02:59, 30 January 2023 (UTC)
@ Rsjaffe: Interesting what Chat-GPT had to say about it writing articles for Wikipedia. If you ask it for its sources on that opinion, will it provide them to you? — The Transhumanist 05:10, 30 January 2023 (UTC)
My knowledge and statement about the existence of Wikipedia articles written by ChatGPT is based on my training data and the information provided by OpenAI, the organization responsible for developing the language model technology that I am based on. OpenAI has publicly discussed the capabilities of their language models, including GPT-3, and how they can be used to generate text that resembles various types of written content, such as articles, poems, and answers to questions. However, I do not have any specific references or sources to cite regarding the writing of Wikipedia articles by GPT models.— rsjaffe 🗣️ 05:22, 30 January 2023 (UTC)
These responses read like they were hard-coded by ChatGPT's developers. If you troll it with inappropriate or unethical inputs, the output switches to this type of boilerplate lecture tone. I've even seen it happen in real time: One day it gladly wrote a press release defending a murderer, the next it refused, along with an explanation of why it couldn't. But of course the model doesn't "know" any of this; in this case it appears the devs programmed it to explain why it's unsuitable for Wikipedia articles but not to actually refuse to write them. And if you think about it, there's likely very little content about ChatGPT's own capabilities in its pre-2021 training data, since these things weren't really being discussed until after it was released. Any self-awareness would have had to have been hard-coded. It's important not to let this give us a false sense of confidence that it will refuse to write anything beyond its abilities, since the model itself doesn't actually know what its limits are. – dlthewave ☎ 20:11, 30 January 2023 (UTC)
This proposed policy seems all fine and good... until, hypothetically, Wikipedia produces its own LLM for use. This may not be as far off as it seems, as Abstract Wikipedia is considering adopting some kind of natural language generation to generate articles. CactiStaccingCrane 10:44, 30 January 2023 (UTC)
The current draft and this discussion are already too long and so violate WP:CREEP. To get a more coherent draft, I naturally asked ChatGPT:
Q. Please write a policy for the use of large language models on Wikipedia
A.
Wikipedia Policy for the Use of Large Language Models
Purpose: To ensure the accuracy and impartiality of information on Wikipedia through responsible use of large language models.
Definitions: A "large language model" refers to an artificial intelligence model with the capacity to generate human-like text.
Use of Large Language Models:
- a. Large language models may be used to improve the accuracy and completeness of Wikipedia articles, but may not be used to generate new articles.
- b. Any contributions made by a large language model must be reviewed and approved by a human editor.
- c. The use of large language models must be disclosed on the talk page of the article.
- d. Large language models must not be used to engage in disruptive behavior, such as vandalism or propaganda.
Responsibility:
- a. Users employing large language models are responsible for ensuring compliance with this policy.
- b. The Wikimedia Foundation, which operates Wikipedia, is not responsible for the content generated by large language models.
Enforcement: Non-compliance with this policy may result in a range of actions, including but not limited to: warning, temporary or permanent ban from editing, or removal of contributions.
Amendment: This policy may be amended by the Wikimedia Foundation or its designated representatives at any time.
This seems as sensible as any of the human suggestions and it's more succinct. Job done. See also WP:LIGHTBULB.
Andrew🐉( talk) 11:34, 30 January 2023 (UTC)
Andrew Davidson, could you explain why "The Wikimedia Foundation, which operates Wikipedia, is not responsible for the content generated by large language models." is included? I'm not sure that this type of disclaimer is correct or necessary. – dlthewave ☎ 14:46, 30 January 2023 (UTC)
- Q. Why did you say that the Wikimedia Foundation is not responsible for the content generated by large language models?
- A. The Wikimedia Foundation, which operates Wikipedia, is not responsible for the content generated by large language models because it is generated by artificial intelligence, not by human editors employed or affiliated with the Foundation. The Foundation simply provides the platform for editing and hosting Wikipedia, and users are responsible for their own contributions, including those made with the assistance of large language models. By stating that the Foundation is not responsible for the content generated by large language models, the policy clarifies the division of responsibility and helps to protect the Foundation from potential legal or reputational liability.
I asked Jimbo for the WMF's position on constructive and destructive uses of LLMs, and the availability of relevant technical tools for each case. Below is his complete answer, with minor layout adjustments:
Great - I can't speak for the Foundation at all but I think it's safe to say that in many ways the WMF staff and board are just like everyone else in this community - very interested in the possibilities of constructive roles here, and worried about the risks as well. I suppose what I am saying is that I don't think the WMF *has* a full position yet, nor would I expect them to!
It looks like the conversation there is a good one and people are learning.
Now, I can't speak for the Foundation but I can speak for myself. I'll only speak at the moment about a few positive ideas that I have rather than go into details about the negatives which are huge and which can be summed up pretty easily with "ChatGPT and similar models make stuff up out of thin air which is horrible".
If you go back in the archives here on my talk page (don't bother, as I'll explain enough) there was a discussion about a proposed article that hadn't made it through a new page review. In response to an inquiry about it, I opened up a newspaper archive website (that I pay for personally) and quickly found 10-15 decent sources which could have been used to improve the article. I skimmed each of them just to figure out if I thought the subject was notable or not. I passed along the sources (but they aren't that useful to anyone who doesn't subscribe to a newspaper archive website) because I didn't have time to actually read them carefully enough to improve the original stub article.
Now, ChatGPT does not have the ability to follow a URL. Also, the archives are in jpeg format, so ChatGPT would not be able to read a download of it, and I don't have any easy way to do image-to-text. (It would be faster to just read and write the articles in this case). But imagine that those minor technical limitations were removed in some way. Suppose I could say: "Hey, ChatGPT, here's a Wikipedia stub. And here are 15 links to sources that I, an experienced Wikipedian, judge to be relevant. Please read these articles and add facts from them to the article, adhering to Wikipedia policies and writing in a typical Wikipedia style. Don't make anything up that isn't clearly in the articles."
That doesn't strike me as a super far-fetched use of this technology. It would then require me to read the output, check that nothing was made up out of thin air, and to make sure it wasn't getting it wrong in some other way. But I suspect this would be a productivity boost for us. And if not today, then in 3 years? 5 years?
I can think of similar use cases. "Here's a Wikipedia entry. Follow all the links to sources and read them. Find sentences in this entry which are in disagreement with what the sources say, if any." "Here's a paragraph from Wikipedia. Someone has complained that the article introduces a subtle bias not found in the original sources. Check the sources and rewrite the article to more closely comply with NPOV policies."
In each case don't imagine some automatic result, just think about whether this might be useful to good editors in at least some cases. It's hard to see that it wouldn't be.
-- Jimbo Wales ( talk) 13:45, 30 January 2023 (UTC)
François Robere ( talk) 14:02, 30 January 2023 (UTC)
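To make the stub-expansion idea in the quoted comment concrete, the prompt such a tool feeds the model could be assembled from the stub and the editor-vetted sources along these lines. This is a hypothetical Python sketch; the function name, prompt wording, and example inputs are invented for illustration, not any existing tool:

```python
def build_expansion_prompt(stub_text, sources):
    """Assemble a source-grounded expansion prompt for an LLM.

    `sources` is a list of (title, full_text) pairs that a human
    editor has already judged relevant and reliable, so the model
    is never asked to find or evaluate sources on its own.
    """
    parts = [
        "Here is a Wikipedia stub:",
        stub_text,
        "Below are sources judged relevant by an experienced Wikipedian.",
    ]
    for i, (title, text) in enumerate(sources, start=1):
        parts.append(f"Source {i}: {title}\n{text}")
    parts.append(
        "Expand the stub using only facts clearly stated in these sources, "
        "writing in a typical Wikipedia style. "
        "Do not make anything up that isn't clearly in the sources."
    )
    return "\n\n".join(parts)

# Hypothetical example inputs:
prompt = build_expansion_prompt(
    "Powder House Island is an island in the St. Lawrence River.",
    [("Local history archive", "The island's powder house was built in 1881.")],
)
```

The human editor would still have to read the model's output against each listed source, exactly as described above; the prompt only constrains, it does not guarantee.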
While I believe it is important to have readable succinct policies, in this case, where the rationale for the policy may not be readily apparent to the LLM-naive user, I’d like to see a secondary page to the policy that discusses the rationale, preserving, in spirit, some of the text deleted from the policy itself. — rsjaffe 🗣️ 17:10, 30 January 2023 (UTC)
Regarding this edit: for this to be a definition list, it would have to define plagiarism, verifiability, neutral point of view, and no original research. The text does not do this; it describes considerations for these topics with respect to large language models. I appreciate there is not a lot of good usage of definition lists out there to point to. Nonetheless, definition lists are only semantically appropriate for things like glossaries. isaacl ( talk) 22:16, 30 January 2023 (UTC)
Regarding this edit: I think we should be careful not to mix up plagiarism and copyright violation. Plagiarism is an academic crime when one fails to acknowledge where an idea came from. One can still violate copyright while avoiding plagiarism. The two concepts have some overlap when text is licensed for reuse with a requirement to cite the original source, but we should be careful not to use the two terms interchangeably. isaacl ( talk) 22:22, 30 January 2023 (UTC)
I feel like this page is a bit harsh on the usage of LLMs; if they generate *perfect* text, it should be okay to verbatim copy it, especially if you're an experienced editor. Thoughts welcome! EpicPupper ( talk) 03:58, 31 January 2023 (UTC)
While the discussion at the Village Pump was closed as migrated here, it doesn't appear anything regarding it was then discussed over here. Nor was it noted that the two polls conducted there, on a blanket ban that would make this page a policy and on banning usage of LLMs on talk pages ( [2], [3]), very much did not seem to be supported by the community from the looks of things. I certainly hope people on this talk page above aren't going to ignore the outcomes of those discussions. Silver seren C 23:28, 31 January 2023 (UTC)
Are there any examples of LLMs successfully being used for "Generating ideas for article expansion" and "Asking an LLM for feedback on an existing article" in the Positive uses section? When I tried this out with a few short geography articles, the output was the same "plausible sounding nonsense" that we've seen with article generation: Mentioning outdated population figures for a place with no listed population; miscounting the number of references; suggesting things that we don't normally include such as a Conclusion section. And analyzing an entire article is useless with ChatGPT's current length limits. Unless there's a valid way to do this that I'm not seeing, I suggest moving these to Riskier Use Cases. – dlthewave ☎ 16:44, 1 February 2023 (UTC)
For the record: I have split out some of the lengthy explanation of copyright issues to Wikipedia:Large language models and copyright and linked to it from the "Copyright" section. jp× g 01:37, 2 February 2023 (UTC)
Based on my research (toying around with the AI), elicit.org is much superior to perplexity.ai in that it only searches research papers and summarizes the sources using GPT-3. The website does not do any original synthesis like perplexity.ai does; it merely summarizes the abstract in one or two sentences. And to top it all off, the website is governed by a 501(c)(3) organization and is transparent about their work (see https://elicit.org/faq). I think we have a lot to learn from the website about how to use LLMs, how to integrate them into our work, and how to align LLMs to do what we want. CactiStaccingCrane 16:29, 2 February 2023 (UTC)
I asked "What are the characteristics of Sabethes cyaneus?" and it summarized one reference as "Sabethes cyaneus is a species of frog." Unfortunately, Sabethes cyaneus is a mosquito. — rsjaffe 🗣️ 19:50, 2 February 2023 (UTC)
App types powered by LLMs, such as chatbots and AI search engines, are not mentioned anywhere in the policy draft. I knew what a chatbot was long before I knew what an LLM was. I used AI search engines long before I knew they were powered by LLMs. "Large language model" is a pretty obscure term. Relying solely on that in the policy would be setting up a trap for those who don't know what it is, even though they are using one unbeknownst to themselves, or who have heard of them, but don't know that one powers a type of app they are using. — The Transhumanist 23:27, 2 February 2023 (UTC)
I mentioned this in the village pump, but while I am generally not pro-LLM -- IMO, none of the "riskier use cases" should go anywhere near Wikipedia -- I do not think it is worthwhile or feasible to disallow LLMs on talk pages or projectspace. Communicating opinion is a far better and less risky use case for LLMs than communicating facts. "Wikipedia editors want to interact with other humans, not with large language models" is sentimental, but ultimately meaningless -- LLMs do not spontaneously post on talk pages. It is still a human, using a tool. And the line between a human whose tool is an LLM and a human whose tool is predictive text, editing tools like Grammarly, or the like is not clean and will get blurrier by the day as companies incorporate LLMs into their writing/editing tools to chase that AI gold. There is a near-certain chance that this recommendation will already be obsolete by the time this policy goes live, and a pretty good chance that in a couple years if not sooner, LLMs will be so commonplace that disclosing their use would be about as feasible as disclosing the use of spellcheck. (An example: As of literally today, Microsoft has released a tool to use OpenAI for sales email writing, and reportedly is considering integrating it into Word.) Gnomingstuff ( talk) 02:02, 3 February 2023 (UTC)
Regarding the passage in Wikipedia:Large language models and copyright that "...there are circumstances under which the terms and conditions of an API may cause a company to restrict continued access to the model based on adherence to certain criteria...", note this is also true for initial access to the model. Thus while the sentence from the preceding paragraph is true, "Companies ... do not automatically hold a claim to copyright on all works produced using their products," they can make copyright claims as part of the terms of use, and thus impose licensing terms for use of the output. isaacl ( talk) 17:49, 4 February 2023 (UTC)
Since I have been harping on the idea that this needs a comprehensive umbrella policy and that has garnered some support from others but not gained enough traction to change the trajectory of the policy on this page here, I've gone ahead with a WP:BOLD draft which everyone is invited to contribute to and critique. —DIYeditor ( talk) 07:09, 6 February 2023 (UTC)
As of 8 February ChatGPT is now part of Bing and this makes it very easy for those interested to test its capabilities. For example, the prompt "What is Wikipedia's policy on paid editing" (and similar questions) gives sensible results in a chat format. The prompt "Does Bing use ChatGPT?" gives a simple "Yes" (with links to relevant citations). Mike Turnbull ( talk) 14:36, 10 February 2023 (UTC)
perplexity.ai may soon be made obsolete. — The Transhumanist 10:03, 15 February 2023 (UTC)
If anyone is looking for an example of an editor using ChatGPT to add content to an article, look at the recent history of Assabet Valley Regional Technical High School. I reverted those additions as ChatGPT is not a reliable source. ElKevbo ( talk) 00:01, 15 February 2023 (UTC)
Came across these. — The Transhumanist 07:59, 15 February 2023 (UTC)
They are looking for the earliest AI-generated article on Wikipedia:
Wikipedia:Wikipedia Signpost/2023-02-04/News and notes#Investigative challenge
"What if ChatGPT was a cat":
Do we have any data on whether LLMs are good at paraphrasing while maintaining the meaning of a sentence? This might be useful for editing pages tagged with Template:Incomprehensible.
I think this has a lot of potential for helping editors with dyslexia or English as a second language write well.
Is there a testing ground here where we can show test examples of what can be done with such models? I'd like to take some incomprehensible articles and attempt to improve them and see if it works vs confabulating facts to make the sentences work together Immanuelle ❤️💚💙 (please tag me) 05:56, 16 February 2023 (UTC)
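One very rough way to screen a paraphrasing edit for dropped or invented facts, as opposed to confabulation, is to diff the content words of the two versions. This is a toy sketch (the function, helper, and stopword list are invented for illustration); it cannot judge meaning, only flag changed terms for a human reviewer to check:

```python
def content_word_diff(original, paraphrase, stopwords=frozenset(
        {"a", "an", "the", "is", "are", "was", "of", "in", "and", "to"})):
    """Return (missing, added): content words lost or introduced by the
    paraphrase. A non-empty `added` set may signal invented content."""
    def words(text):
        # Lowercase, strip common punctuation, drop function words.
        return {w.strip(".,;:!?()").lower() for w in text.split()} - stopwords
    orig, para = words(original), words(paraphrase)
    return orig - para, para - orig

# A harmless synonym swap ("built" -> "constructed") and a silently
# changed date ("1818" -> "1881") both show up in the diff sets:
missing, added = content_word_diff(
    "The powder house was built in 1818.",
    "The powder house was constructed in 1881.",
)
```

A reviewer would still have to decide which differences are legitimate rephrasing and which are changed facts; the sketch only narrows down what to look at.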
I think we need to add some parameters to Template:AI-generated so it is more informative. Ideally I think there should be a parameter for the revision, which will default to the revision the template is added with, for self declaring editors. In the future there could be an external tool that would be able to show text added in such a revision so it can be examined in isolation.
Also, I think it might be good practice for self-declaring editors to declare the type of edit they made with the LLM: paraphrasing edits vs. edits that introduce new content. Paraphrasing edits are likely much easier to review, as they shouldn't introduce new information, so a reviewer could just go over an edit flagged as a paraphrase with AutoWikiBrowser, make a judgment call on whether the paraphrasing was legitimate, and remove the template if so. Immanuelle ❤️💚💙 (please tag me) 06:06, 16 February 2023 (UTC)
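For a sense of how the proposed parameters might be consumed, a reviewing tool could read a hypothetical {{AI-generated|revision=...|type=...}} call roughly like this. Note these parameter names come from the proposal above and are not the template's actual parameters; the parser itself is a simplified sketch that ignores nested templates:

```python
import re

def parse_ai_generated(wikitext):
    """Extract named parameters from a hypothetical
    {{AI-generated|revision=...|type=...}} template call."""
    m = re.search(r"\{\{AI-generated\s*(\|[^}]*)?\}\}", wikitext)
    if not m or not m.group(1):
        return {}
    params = {}
    for part in m.group(1).strip("|").split("|"):
        if "=" in part:
            key, _, value = part.partition("=")
            params[key.strip()] = value.strip()
    return params

# Hypothetical template call with a revision ID and edit type:
params = parse_ai_generated(
    "{{AI-generated|revision=1140000000|type=paraphrase}}")
```

An external tool could then use the `revision` value to fetch and display just the text added in that revision, as suggested above.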
OpenAI's "publication policy" says:
However, that's a license agreement; not a copyright claim. We've previously assumed that this is required for copyright compliance, but that doesn't appear to be the case. DFlhb ( talk) 12:06, 19 February 2023 (UTC)
In addition to an inline citation, in-text attribution is usually required when quoting or closely paraphrasing source material. See also the section WP:Plagiarism#Avoiding_plagiarism.
"disclosed in a way that no reader could possibly miss". Journalists seem to have adopted a standard of disclosing AI use at the top of the piece as shown in this article from CNET, and I think it would be wise for Wikipedia to do the same. This is new territory for us since we don't normally attribute authorship this way even when copying open-license text verbatim. – dlthewave ☎ 16:22, 19 February 2023 (UTC)
@ Alalch E.: In response to this edit: It seems to me that WP:Plagiarism requires in-text attribution independent of which LLM provider is used, see the passage cited above. This would also apply to the text in the section "Declare LLM use".
I'm not sure that it is a good idea to include the section "Productive uses of LLMs" in this policy. The following two reasons are my main concerns: (1) the section makes false claims and (2) this is supposed to be a policy, not a how-to essay. As for false claims: the section claims that LLMs in general have these uses. But the claims were only tested on ChatGPT, as far as I can tell. I tried some of JPxG's demonstrations (table rotation and plot summary) on perplexity.ai, elicit.org, and distil-gpt2. They all failed for these examples. Since different LLMs are trained for different purposes, it would be rather surprising if you could look at the strengths and weaknesses of one and generalize them to all others. But even when we restrict ourselves to ChatGPT, it is not uncontroversial at which tasks it excels and which tasks are risky, as discussed on the talk page. As for the second reason: I think it would be better to relegate these explanations to a how-to essay and keep the policy slim. I'm not sure what value they provide besides what is already covered in the section "Using LLMs". Phlsph7 ( talk) 09:04, 20 February 2023 (UTC)
The section "LLM risks and pitfalls" currently contains the following paragraph:
As the technology continually advances, it may be claimed that a specific large language model has reached a point where it does, on its own, succeed in outputting text which is compatible with the encyclopedia's requirements, when given a well engineered prompt. However, not everyone will always use the most state-of-the-art and the most Wikipedia-compliant model, while also coming up with suitable prompts; at any given moment, individuals are probably using a range of generations and varieties of the technology, and the generation with regard to which these deficiencies have been recognized by the community may persist, if in lingering form, for a rather long time.
This paragraph speculates on how LLMs may develop in the future. I don't think that it is a good idea to include this paragraph in our policy since this is not relevant to current usage, independently of whether this speculation is true or false.
As Alalch E. has pointed out, one idea behind this paragraph is forestalling editors in a more or less distant future who claim that their LLM is so advanced that it automatically follows all policies. I think the basic idea behind this point is valid. But since the policy covers LLMs in general, it also covers future advanced LLMs and editors using them. If that point needs to be explicitly mentioned then maybe we can find a less verbose way to include it. The alternative would be to wait till that time comes and then to update our policy accordingly. Phlsph7 ( talk) 10:26, 20 February 2023 (UTC)
This policy applies to all usages of LLMs independently of whether a provider or user of an LLM claims that it automatically complies with Wikipedia guidelines. If we want to emphasize future technological developments, we could use
This policy applies to all usages of LLMs independently of whether a provider or user of an LLM claims that, due to technological advances, it automatically complies with Wikipedia guidelines. To me, this seems obvious. But to others, it may not. In that case, it may be good to state it explicitly. Phlsph7 ( talk) 11:54, 20 February 2023 (UTC)
Just noting that Grammarly is an LLM, I believe. I don't think we have an issue with Grammarly, so the issue is probably generating large sections of text. Talpedia ( talk) 12:59, 23 February 2023 (UTC)
I've been thinking of playing with LLMs on Wikipedia for a while. I suspect good uses might be:
It's worth noting that LLMs that are better at sourcing (e.g. RAG) are coming. And there has been some work in automated fact checking.
If people were keen on prototyping some tools for using LLMs *well* for editing, I'd be keen – I can code and have an understanding of some of the libraries. It might be good to work with someone more on the "product" side of things to ping ideas off. Talpedia ( talk) 13:11, 23 February 2023 (UTC)
... using LLMs to summarize copyrighted content (like news articles) may produce excessively close paraphrases.
Does anyone have an example of this happening? Or introducing bias, OR, or hallucinations when asked to summarize? Sandizer ( talk) 08:13, 24 February 2023 (UTC)
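As one way to quantify the close-paraphrasing concern, a rough heuristic is the fraction of a summary's word n-grams that appear verbatim in the source: a high fraction suggests copying rather than summarizing. This is a minimal illustrative sketch, not an established WP:CLOP test; n-gram length and any threshold would be judgment calls:

```python
def shared_ngrams(source, summary, n=5):
    """Fraction of the summary's word n-grams that appear verbatim
    in the source -- a rough close-paraphrase signal."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    src, summ = ngrams(source), ngrams(summary)
    if not summ:
        return 0.0
    return len(summ & src) / len(summ)

# Most of this "summary" is a verbatim run from the source,
# so a large share of its 5-grams match:
score = shared_ngrams(
    "the committee voted on tuesday to approve the new funding measure",
    "the committee voted on tuesday to approve extra money",
)
```

A genuine rewording in the editor's own words would score near zero, while lightly edited copying scores high; borderline cases would still need human judgment.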
Still don't think this subsection belongs here. The rest of the policy is about editors' conduct, but this is about WP:RS. Shouldn't this be addressed by WP:RS, or WP:RSP, or some other way? LLM-generated sources are already in semi-widespread use, see for example Wikipedia:Reliable_sources/Noticeboard#StatMuse DFlhb ( talk) 13:03, 24 February 2023 (UTC)
(in reverse-chronological order)
Enjoy. — The Transhumanist 07:36, 5 March 2023 (UTC)
Based on reading and thinking about the above reports:
In conclusion, things are progressing rapidly, and a larger wave of LLM-generated content contributions than previously expected is likely to flow into Wikipedia soon. We need to be ready for it. — The Transhumanist 07:36, 5 March 2023 (UTC)
P.S.: @ JPxG, DFlhb, Rsjaffe, Fram, Andrew Davidson, Alalch E., François Robere, Ovinus (alt), Isaacl, EpicPupper, Silver seren, Phlsph7, BlankpopsiclesilviaASHs4, 2001:448A:304A:3A2A:F87F:AE94:6B45:64E1, Gnomingstuff, DIYeditor, Michael D. Turnbull, ElKevbo, Novem Linguae, HaeB, Talpedia, and Sandizer:
Check it out! Here's an article that is in part about the effort to write this guideline. (See the section "What about Wikipedia?")
https://wikiedu.org/blog/2023/02/21/chatgpt-wikipedia-and-student-writing-assignments/
— The Transhumanist 09:18, 1 March 2023 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 | Archive 2 | Archive 3 | Archive 4 | Archive 5 |
I give the AI a shot to look for sources by searching in similar way to Google in my attempt to give an idea how useful it is to provide sources for citation needed tags. But what I noticed is that AI-generated answers grabbed information from Wikipedia itself, which is bad, that's why you have to be careful to use Perplexity AI to find sources for Wikipedia articles. 2001:448A:3046:34F2:B825:A231:DC69:F03F ( talk) 03:05, 29 January 2023 (UTC)
— Preceding unsigned comment added by 2001:448A:3046:34F2:B825:A231:DC69:F03F ( talk) 03:27, 29 January 2023 (UTC)
@ Rsjaffe and JPxG: I believe that these inaccuracies can also apply to ChatGPT integration of Bing even if it hasn't been out yet. Also, some sources on Perplexity AI results are unreliable or even listed on perennial sources list as I said before in ANI thread. 2001:448A:3046:34F2:B825:A231:DC69:F03F ( talk) 04:06, 29 January 2023 (UTC)
The historic 'Powder House' was built in 1818 as a state gunpowder store and is considered to have historical significance, with a citation to a WordPress blog talking about a different location entirely (Powder House Island was constructed by a private company in 1881). jp× g 08:40, 29 January 2023 (UTC)
This below is my second attempt at crafting more accurate answers by editing sources. Note that not all results includes Add Sources button on this aforementioned conversational search engine. Results before editing is here above results after editing.
Prompt: " an algorithm to simulate cling and skids in billiards"
Result (before editing): " An algorithm to simulate cling and skids in billiards requires coordination of equations and data access for each simulated component[1][2]. Cling, skid, or kick is a "bad hit" resulting from an excessive amount of throw[3]. This has nothing to do with pool tables, 16 balls, 2 cues, 1 chalk and 1 triangle[4], or slang words for "toilet billiards"[5]."
Sources:
Result (after editing): "There is only one search result for this question, which is a definition of the term "cling" in billiards[1]. No algorithm was provided."
Sources (after editing):
WP:DENY
The following discussion has been closed. Please do not modify it.
@ Rsjaffe, The Transhumanist, and JPxG: Any thoughts on the source-editing feature on Perplexity AI? Currently, adding sources is limited, since it doesn't let you type in any URL you want when adding "sources". One could argue that this is a double-edged sword (the ability to craft even more accurate answers at the cost of adding room for misuse), right? 2001:448A:3046:59C8:9074:C5DB:F266:4DEE ( talk) 02:53, 26 February 2023 (UTC)
I had a brief simple conversation with ChatGPT that I thought would be interesting to see. My prompt has a bullet and the reply by ChatGPT is indented.
ChatGPT goes ahead and writes an article even though it warned me, in answer to other questions, that it is not usable for Wikipedia
Nematodes, also known as roundworms, are a group of parasitic and free-living worms that are among the most abundant multicellular organisms on Earth. They are found in nearly every habitat, including soil, freshwater, and marine environments, as well as in the tissues of plants and animals. Nematodes are elongated, cylindrical worms that typically range from 0.5 to 2 millimeters in length, although some species can grow to be several meters long. They have a simple digestive system, nervous system, and reproductive system, and move using a characteristic undulating motion.

Nematodes play important roles in various ecological and agricultural systems. Some species are beneficial, serving as decomposers of organic matter and as predators of other soil-dwelling organisms, while others are harmful pests that can cause damage to crops and other plants. For example, the root-knot nematode is a major pest of crops such as cotton, tomatoes, and peanuts, while the soybean cyst nematode is a destructive parasite of soybean crops.

In medicine, nematodes are important parasites of humans and animals, causing diseases such as elephantiasis and river blindness. They can also be used as experimental organisms in biological research, as they have simple anatomy and physiology, and can be easily cultured and manipulated in the laboratory.

References:
M. W. Adams, “Nematoda,” in The Biology of Nematodes, edited by D. L. Lee et al. (Taylor & Francis, 1998), pp. 3–21.
J. R. Grewal, “Nematode Parasites of Crops,” Annual Review of Phytopathology, vol. 48, no. 1 (2010), pp. 365–390.
J. B. Bloomquist and J. R. Grewal, “Entomopathogenic Nematodes,” Annual Review of Entomology, vol. 55 (2010), pp. 271–292.
— rsjaffe 🗣️ 02:59, 30 January 2023 (UTC)
@ Rsjaffe: Interesting what Chat-GPT had to say about it writing articles for Wikipedia. If you ask it for its sources on that opinion, will it provide them to you? — The Transhumanist 05:10, 30 January 2023 (UTC)
My knowledge and statement about the existence of Wikipedia articles written by ChatGPT is based on my training data and the information provided by OpenAI, the organization responsible for developing the language model technology that I am based on. OpenAI has publicly discussed the capabilities of their language models, including GPT-3, and how they can be used to generate text that resembles various types of written content, such as articles, poems, and answers to questions. However, I do not have any specific references or sources to cite regarding the writing of Wikipedia articles by GPT models.— rsjaffe 🗣️ 05:22, 30 January 2023 (UTC)
These responses read like they were hard-coded by ChatGPT's developers. If you troll it with inappropriate or unethical inputs, the output switches to this type of boilerplate lecture tone. I've even seen it happen in real time: one day it gladly wrote a press release defending a murderer; the next, it refused, along with an explanation of why it couldn't. But of course the model doesn't "know" any of this; in this case it appears the devs programmed it to explain why it's unsuitable for Wikipedia articles but not to actually refuse to write them. And if you think about it, there's likely very little content about ChatGPT's own capabilities in its pre-2021 training data, since these things weren't really being discussed until after it was released. Any self-awareness would have had to be hard-coded. It's important not to let this give us a false sense of confidence that it will refuse to write anything beyond its abilities, since the model itself doesn't actually know what its limits are. – dlthewave ☎ 20:11, 30 January 2023 (UTC)
This proposed policy seems all fine and good... until, hypothetically, Wikipedia produces its own LLM for use. This may not be as far off as it seems, as the Abstract Wikipedia project is considering adopting some kind of natural language generation to generate articles. CactiStaccingCrane 10:44, 30 January 2023 (UTC)
The current draft and this discussion are already too long and so violate WP:CREEP. To get a more coherent draft, I naturally asked ChatGPT:
Q. Please write a policy for the use of large language models on Wikipedia
A.
Wikipedia Policy for the Use of Large Language Models
Purpose: To ensure the accuracy and impartiality of information on Wikipedia through responsible use of large language models.
Definitions: A "large language model" refers to an artificial intelligence model with the capacity to generate human-like text.
Use of Large Language Models:
- a. Large language models may be used to improve the accuracy and completeness of Wikipedia articles, but may not be used to generate new articles.
- b. Any contributions made by a large language model must be reviewed and approved by a human editor.
- c. The use of large language models must be disclosed on the talk page of the article.
- d. Large language models must not be used to engage in disruptive behavior, such as vandalism or propaganda.
Responsibility:
- a. Users employing large language models are responsible for ensuring compliance with this policy.
- b. The Wikimedia Foundation, which operates Wikipedia, is not responsible for the content generated by large language models.
Enforcement: Non-compliance with this policy may result in a range of actions, including but not limited to: warning, temporary or permanent ban from editing, or removal of contributions.
Amendment: This policy may be amended by the Wikimedia Foundation or its designated representatives at any time.
This seems as sensible as any of the human suggestions and it's more succinct. Job done. See also WP:LIGHTBULB.
Andrew🐉( talk) 11:34, 30 January 2023 (UTC)
Andrew Davidson, could you explain why "The Wikimedia Foundation, which operates Wikipedia, is not responsible for the content generated by large language models" is included? I'm not sure that this type of disclaimer is correct or necessary. – dlthewave ☎ 14:46, 30 January 2023 (UTC)
- Q. Why did you say that the Wikimedia Foundation is not responsible for the content generated by large language models?
- A. The Wikimedia Foundation, which operates Wikipedia, is not responsible for the content generated by large language models because it is generated by artificial intelligence, not by human editors employed or affiliated with the Foundation. The Foundation simply provides the platform for editing and hosting Wikipedia, and users are responsible for their own contributions, including those made with the assistance of large language models. By stating that the Foundation is not responsible for the content generated by large language models, the policy clarifies the division of responsibility and helps to protect the Foundation from potential legal or reputational liability.
I asked Jimbo for the WMF's position on constructive and destructive uses of LLMs, and the availability of relevant technical tools for each case. Below is his complete answer, with minor layout adjustments:
Great - I can't speak for the Foundation at all but I think it's safe to say that in many ways the WMF staff and board are just like everyone else in this community - very interested in the possibilities of constructive roles here, and worried about the risks as well. I suppose what I am saying is that I don't think the WMF *has* a full position yet, nor would I expect them to!
It looks like the conversation there is a good one and people are learning.
Now, I can't speak for the Foundation but I can speak for myself. I'll only speak at the moment about a few positive ideas that I have rather than go into details about the negatives, which are huge and which can be summed up pretty easily with "ChatGPT and similar models make stuff up out of thin air which is horrible".
If you go back in the archives here on my talk page (don't bother, as I'll explain enough) there was a discussion about a proposed article that hadn't made it through a new page review. In response to an inquiry about it, I opened up a newspaper archive website (that I pay for personally) and quickly found 10-15 decent sources which could have been used to improve the article. I skimmed each of them just to figure out if I thought the subject was notable or not. I passed along the sources (but they aren't that useful to anyone who doesn't subscribe to a newspaper archive website) because I didn't have time to actually read them carefully enough to improve the original stub article.
Now, ChatGPT does not have the ability to follow a URL. Also, the archives are in jpeg format, so ChatGPT would not be able to read a download of it, and I don't have any easy way to do image-to-text. (It would be faster to just read and write the articles in this case). But imagine that those minor technical limitations were removed in some way. Suppose I could say: "Hey, ChatGPT, here's a Wikipedia stub. And here are 15 links to sources that I, an experienced Wikipedian, judge to be relevant. Please read these articles and add facts from them to the article, adhering to Wikipedia policies and writing in a typical Wikipedia style. Don't make anything up that isn't clearly in the articles."
That doesn't strike me as a super far-fetched use of this technology. It would then require me to read the output, check that nothing was made up out of thin air, and to make sure it wasn't getting it wrong in some other way. But I suspect this would be a productivity boost for us. And if not today, then in 3 years? 5 years?
I can think of similar use cases. "Here's a Wikipedia entry. Follow all the links to sources and read them. Find sentences in this entry which are in disagreement with what the sources say, if any." "Here's a paragraph from Wikipedia. Someone has complained that the article introduces a subtle bias not found in the original sources. Check the sources and rewrite the article to more closely comply with NPOV policies."
In each case don't imagine some automatic result, just think about whether this might be useful to good editors in at least some cases. It's hard to see that it wouldn't be.
-- Jimbo Wales ( talk) 13:45, 30 January 2023 (UTC)
François Robere ( talk) 14:02, 30 January 2023 (UTC)
While I believe it is important to have readable succinct policies, in this case, where the rationale for the policy may not be readily apparent to the LLM-naive user, I’d like to see a secondary page to the policy that discusses the rationale, preserving, in spirit, some of the text deleted from the policy itself. — rsjaffe 🗣️ 17:10, 30 January 2023 (UTC)
Regarding this edit: for this to be a definition list, it would have to define plagiarism, verifiability, neutral point of view, and no original research. The text does not do this; it describes considerations for these topics with respect to large language models. I appreciate there is not a lot of good usage of definition lists out there to point to. Nonetheless, definition lists are only semantically appropriate for things like glossaries. isaacl ( talk) 22:16, 30 January 2023 (UTC)
Regarding this edit: I think we should be careful not to mix up plagiarism and copyright violation. Plagiarism is an academic crime when one fails to acknowledge where an idea came from. One can still violate copyright while avoiding plagiarism. The two concepts have some overlap when text is licensed for reuse with a requirement to cite the original source, but we should be careful not to use the two terms interchangeably. isaacl ( talk) 22:22, 30 January 2023 (UTC)
I feel like this page is a bit harsh on the usage of LLMs; if they generate *perfect* text, it should be okay to verbatim copy it, especially if you're an experienced editor. Thoughts welcome! EpicPupper ( talk) 03:58, 31 January 2023 (UTC)
While a migrated discussion close was made on the discussion at the Village Pump, it doesn't appear anything regarding it was then discussed over here. Nor was it noted that both polls conducted there on a blanket ban, which would have made this page a policy and banned the usage of LLMs on talk pages ( [2], [3]), very much did not seem to be supported by the community, from the looks of things. I certainly hope people on this talk page above aren't going to ignore the outcomes of those discussions. Silver seren C 23:28, 31 January 2023 (UTC)
Are there any examples of LLMs successfully being used for "Generating ideas for article expansion" and "Asking an LLM for feedback on an existing article" in the Positive uses section? When I tried this out with a few short geography articles, the output was the same "plausible sounding nonsense" that we've seen with article generation: mentioning outdated population figures for a place with no listed population; miscounting the number of references; suggesting things that we don't normally include, such as a Conclusion section. And analyzing an entire article is useless with ChatGPT's current length limits. Unless there's a valid way to do this that I'm not seeing, I suggest moving these to Riskier Use Cases. – dlthewave ☎ 16:44, 1 February 2023 (UTC)
For the record: I have split out some of the lengthy explanation of copyright issues to Wikipedia:Large language models and copyright and linked to it from the "Copyright" section. jp× g 01:37, 2 February 2023 (UTC)
Based on my research (toying around with the AI), elicit.org is much superior to perplexity.ai in that it only searches research papers and summarizes the sources using GPT-3. The website does not do any original synthesis like perplexity.ai does; it merely summarizes the abstract in one or two sentences. And to top it all off, the website is governed by a 501(c)(3) organization and is transparent about their work (see https://elicit.org/faq). I think we have a lot to learn from the website about how to use LLMs, how to integrate them into our work, and how to align LLMs to do what we want. CactiStaccingCrane 16:29, 2 February 2023 (UTC)
I asked "What are the characteristics of Sabethes cyaneus?" and it summarized one reference as "Sabethes cyaneus is a species of frog." Unfortunately, Sabethes cyaneus is a mosquito. — rsjaffe 🗣️ 19:50, 2 February 2023 (UTC)
App types powered by LLMs, such as chatbots and AI search engines, are not mentioned anywhere in the policy draft. I knew what a chatbot was long before I knew what an LLM was. I used AI search engines long before I knew they were powered by LLMs. "Large language model" is a pretty obscure term. Relying solely on that in the policy would be setting up a trap for those who don't know what it is, even though they are using one unbeknownst to themselves, or who have heard of them, but don't know that one powers a type of app they are using. — The Transhumanist 23:27, 2 February 2023 (UTC)
I mentioned this in the village pump, but while I am generally not pro-LLM -- IMO, none of the "riskier use cases" should go anywhere near Wikipedia -- I do not think it is worthwhile or feasible to disallow LLMs on talk pages or projectspace. Communicating opinion is a far better and less risky use case for LLMs than communicating facts. "Wikipedia editors want to interact with other humans, not with large language models" is sentimental, but ultimately meaningless -- LLMs do not spontaneously post on talk pages. It is still a human, using a tool. And the line between a human whose tool is an LLM and a human whose tool is predictive text, editing tools like Grammarly, or the like is not clean and will get blurrier by the day as companies incorporate LLMs into their writing/editing tools to chase that AI gold. There is a near-certain chance that this recommendation will already be obsolete by the time this policy goes live, and a pretty good chance that in a couple years if not sooner, LLMs will be so commonplace that disclosing their use would be about as feasible as disclosing the use of spellcheck. (An example: As of literally today, Microsoft has released a tool to use OpenAI for sales email writing, and reportedly is considering integrating it into Word.) Gnomingstuff ( talk) 02:02, 3 February 2023 (UTC)
Regarding the passage in Wikipedia:Large language models and copyright that "...there are circumstances under which the terms and conditions of an API may cause a company to restrict continued access to the model based on adherence to certain criteria...", note this is also true for initial access to the model. Thus while the sentence from the preceding paragraph is true, "Companies ... do not automatically hold a claim to copyright on all works produced using their products," they can make copyright claims as part of the terms of use, and thus impose licensing terms for use of the output. isaacl ( talk) 17:49, 4 February 2023 (UTC)
Since I have been harping on the idea that this needs a comprehensive umbrella policy and that has garnered some support from others but not gained enough traction to change the trajectory of the policy on this page here, I've gone ahead with a WP:BOLD draft which everyone is invited to contribute to and critique. —DIYeditor ( talk) 07:09, 6 February 2023 (UTC)
As of 8 February ChatGPT is now part of Bing and this makes it very easy for those interested to test its capabilities. For example, the prompt "What is Wikipedia's policy on paid editing" (and similar questions) gives sensible results in a chat format. The prompt "Does Bing use ChatGPT?" gives a simple "Yes" (with links to relevant citations). Mike Turnbull ( talk) 14:36, 10 February 2023 (UTC)
perplexity.ai may soon be rendered obsolete. — The Transhumanist 10:03, 15 February 2023 (UTC)
If anyone is looking for an example of an editor using ChatGPT to add content to an article, look at the recent history of Assabet Valley Regional Technical High School. I reverted those additions as ChatGPT is not a reliable source. ElKevbo ( talk) 00:01, 15 February 2023 (UTC)
Came across these. — The Transhumanist 07:59, 15 February 2023 (UTC)
They are looking for the earliest AI-generated article on Wikipedia:
Wikipedia:Wikipedia Signpost/2023-02-04/News and notes#Investigative challenge
"What if ChatGPT was a cat":
Do we have any data on whether LLMs are good at paraphrasing while maintaining the meaning of a sentence? This might be useful for editing pages tagged Template:Incomprehensible.
I think this has a lot of potential for helping editors with dyslexia or English as a second language write well.
Is there a testing ground here where we can show test examples of what can be done with such models? I'd like to take some incomprehensible articles and attempt to improve them and see if it works vs confabulating facts to make the sentences work together Immanuelle ❤️💚💙 (please tag me) 05:56, 16 February 2023 (UTC)
I think we need to add some parameters to Template:AI-generated so it is more informative. Ideally, I think there should be a parameter for the revision, which would default to the revision the template is added with, for self-declaring editors. In the future, an external tool could show the text added in such a revision so it can be examined in isolation.
Also, I think it might be good practice for self-declaring editors to declare the type of edit they made with the LLM: paraphrasing edits versus edits that add new content. Paraphrasing edits are likely much easier to review, as they shouldn't introduce new information, so a reviewer could just go over an edit flagged as paraphrasing with AutoWikiBrowser, make a judgment call on whether the paraphrasing was legitimate, and remove the template if so. Immanuelle ❤️💚💙 (please tag me) 06:06, 16 February 2023 (UTC)
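To sketch what such a declaration might look like (purely illustrative: the "revision" and "type" parameters proposed above are hypothetical and are not currently supported by Template:AI-generated), a self-declared tag could be:

```wikitext
<!-- Hypothetical parameters: "revision" and "type" do not exist in the current template -->
{{AI-generated|date=February 2023|revision=123456789|type=paraphrase}}
```

A reviewing tool could then read the revision parameter to diff just the flagged edit, and the type parameter to decide whether to check for newly introduced claims or only for fidelity of the paraphrase.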
OpenAI's "publication policy" says:
However, that's a license agreement; not a copyright claim. We've previously assumed that this is required for copyright compliance, but that doesn't appear to be the case. DFlhb ( talk) 12:06, 19 February 2023 (UTC)
In addition to an inline citation, in-text attribution is usually required when quoting or closely paraphrasing source material. See also the section WP:Plagiarism#Avoiding_plagiarism.
"disclosed in a way that no reader could possibly miss". Journalists seem to have adopted a standard of disclosing AI use at the top of the piece as shown in this article from CNET, and I think it would be wise for Wikipedia to do the same. This is new territory for us since we don't normally attribute authorship this way even when copying open-license text verbatim. – dlthewave ☎ 16:22, 19 February 2023 (UTC)
@ Alalch E.: In response to this edit: It seems to me that WP:Plagiarism requires in-text attribution independent of which LLM provider is used, see the passage cited above. This would also apply to the text in the section "Declare LLM use".
I'm not sure that it is a good idea to include the section "Productive uses of LLMs" in this policy. The following two reasons are my main concerns: (1) the section makes false claims and (2) this is supposed to be a policy, not a how-to essay. As for false claims: the section claims that LLMs in general have these uses. But the claims were only tested on ChatGPT, as far as I can tell. I tried some of JPxG's demonstrations (table rotation and plot summary) on perplexity.ai, elicit.org, and distil-gpt2. They all failed for these examples. Since different LLMs are trained for different purposes, it would be rather surprising if you could look at the strengths and weaknesses of one and generalize them to all others. But even when we restrict ourselves to ChatGPT, it is not uncontroversial at which tasks it excels and which tasks are risky, as discussed on the talk page. As for the second reason: I think it would be better to relegate these explanations to a how-to essay and keep the policy slim. I'm not sure what value they provide besides what is already covered in the section "Using LLMs". Phlsph7 ( talk) 09:04, 20 February 2023 (UTC)
The section "LLM risks and pitfalls" currently contains the following paragraph:
As the technology continually advances, it may be claimed that a specific large language model has reached a point where it does, on its own, succeed in outputting text which is compatible with the encyclopedia's requirements, when given a well engineered prompt. However, not everyone will always use the most state-of-the-art and the most Wikipedia-compliant model, while also coming up with suitable prompts; at any given moment, individuals are probably using a range of generations and varieties of the technology, and the generation with regard to which these deficiencies have been recognized by the community may persist, if in lingering form, for a rather long time.
This paragraph speculates on how LLMs may develop in the future. I don't think that it is a good idea to include this paragraph in our policy since this is not relevant to current usage, independently of whether this speculation is true or false.
As Alalch E. has pointed out, one idea behind this paragraph is forestalling editors in a more or less distant future who claim that their LLM is so advanced that it automatically follows all policies. I think the basic idea behind this point is valid. But since the policy covers LLMs in general, it also covers future advanced LLMs and editors using them. If that point needs to be explicitly mentioned then maybe we can find a less verbose way to include it. The alternative would be to wait till that time comes and then to update our policy accordingly. Phlsph7 ( talk) 10:26, 20 February 2023 (UTC)
This policy applies to all usages of LLMs independently of whether a provider or user of an LLM claims that it automatically complies with Wikipedia guidelines.

If we want to emphasize future technological developments, we could use:

This policy applies to all usages of LLMs independently of whether a provider or user of an LLM claims that, due to technological advances, it automatically complies with Wikipedia guidelines.

To me, this seems obvious. But to others, it may not. In that case, it may be good to state it explicitly. Phlsph7 ( talk) 11:54, 20 February 2023 (UTC)
Just noting that Grammarly is, I believe, an LLM. I don't think we have an issue with Grammarly. So the issue is probably generating large sections of text. Talpedia ( talk) 12:59, 23 February 2023 (UTC)
I've been thinking of playing with LLMs on Wikipedia for a while. I suspect good uses might be:
It's worth noting that LLMs that are better at sourcing (e.g. retrieval-augmented generation, RAG) are coming, and there has been some work on automated fact checking.
If people were keen on prototyping some tools for using LLMs *well* for editing, I'd be keen - I can code and have an understanding of some of the libraries. It might be good to work with someone more on the "product" side of things to ping ideas off. Talpedia ( talk) 13:11, 23 February 2023 (UTC)
... using LLMs to summarize copyrighted content (like news articles) may produce excessively close paraphrases.
Does anyone have an example of this happening? Or introducing bias, OR, or hallucinations when asked to summarize? Sandizer ( talk) 08:13, 24 February 2023 (UTC)
Still don't think this subsection belongs here. The rest of the policy is about editors' conduct, but this is about WP:RS. Shouldn't this be addressed by WP:RS, or WP:RSP, or some other way? LLM-generated sources are already in semi-widespread use, see for example Wikipedia:Reliable_sources/Noticeboard#StatMuse DFlhb ( talk) 13:03, 24 February 2023 (UTC)
(in reverse-chronological order)
Enjoy. — The Transhumanist 07:36, 5 March 2023 (UTC)
Based on reading and thinking about the above reports:
In conclusion, things are progressing rapidly, and a larger wave of LLM-generated content contributions than previously expected is likely to flow into Wikipedia soon. We need to be ready for it. — The Transhumanist 07:36, 5 March 2023 (UTC)
P.S.: @ JPxG, DFlhb, Rsjaffe, Fram, Andrew Davidson, Alalch E., François Robere, Ovinus (alt), Isaacl, EpicPupper, Silver seren, Phlsph7, BlankpopsiclesilviaASHs4, 2001:448A:304A:3A2A:F87F:AE94:6B45:64E1, Gnomingstuff, DIYeditor, Michael D. Turnbull, ElKevbo, Novem Linguae, HaeB, Talpedia, and Sandizer:
Check it out! Here's an article that is in part about the effort to write this guideline. (See the section "What about Wikipedia?")
https://wikiedu.org/blog/2023/02/21/chatgpt-wikipedia-and-student-writing-assignments/
— The Transhumanist 09:18, 1 March 2023 (UTC)