This is the talk page for discussing improvements to the Large language models page.
Archives: Index, 1, 2, 3, 4, 5, 6, 7. Auto-archiving period: 14 days.
Daily pageviews of Wikipedia:Large language models: the interactive graph is temporarily disabled; it can be viewed at pageviews.wmcloud.org.
This project page has been mentioned by multiple media organizations.
WikiProject AI Cleanup
This section is pinned and will not be automatically archived.
I wrote a user script called WikiChatbot. It works by selecting text in an article and then clicking one of the buttons on the right to enquire about the selected text. It includes many functions. For example, it can summarize and copyedit the selected text, explain it, and provide examples. The chat panel can also be used to ask specific questions about the selected text or the topic in general. The script uses the AI model GPT 3.5. It requires an API key from OpenAI. New OpenAI accounts can use it freely for the first 3 months with certain limitations. For a more detailed description of all these issues and examples of how the script can be used, see the documentation at User:Phlsph7/WikiChatbot.
I was hoping to get some feedback on the script in general and how it may be improved. I tried to follow WP:LLM in writing the documentation of the chatbot. It would be helpful if someone could take a look to ensure that it is understandable and that the limitations and dangers are properly presented. I also added some examples of how to use edit summaries to declare LLM usage. These suggestions should be checked. Feel free to edit the documentation page directly for any minor issues. I'm also not sure how difficult it is to follow the instructions, so it would be great if someone could try to set up the script, use it, and explain which steps were confusing. My OpenAI account is already older than 3 months, so I was not able to verify the claims about the free period and how severe the limitations are. If someone has a younger account or is willing to open a new account to try it, that would be helpful.
Other feedback on the idea in general, on its problems, or on new features to implement is also welcome. Phlsph7 ( talk) 12:45, 12 July 2023 (UTC)
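The select-text-then-click mechanism described above could be sketched roughly as follows. This is not the script's actual code (WikiChatbot is a JavaScript user script); the function name, prompt wording, and task labels here are assumptions used purely to illustrate how a button press might be turned into a GPT-3.5 request.

```python
def build_messages(task, selected_text):
    """Turn one button action plus the user's selected article text
    into a chat-completion message list (illustrative only)."""
    instructions = {
        "summarize": "Summarize the following article excerpt.",
        "copyedit": "Copyedit the following article excerpt.",
        "explain": "Explain the following article excerpt in simple terms.",
    }
    return [
        {"role": "system", "content": "You are a Wikipedia editing assistant."},
        {"role": "user", "content": f"{instructions[task]}\n\n{selected_text}"},
    ]

# Sending the request would require an OpenAI API key, e.g. (not run here):
# import openai
# openai.api_key = "sk-..."
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=build_messages("summarize", selected_text),
# )
```

Keeping the per-button instruction separate from the selected text, as above, is one simple way to map many buttons onto a single request path.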
If we're going to allow LLM editing, this is a great tool to guide editors to the specific use cases that have community approval (even if those use cases are few to none at this point). I found it to be straightforward and easy to use. – dlthewave ☎ 16:06, 23 July 2023 (UTC)
https://www.futurehouse.org/wikicrow
Welp. Sandizer ( talk) 18:41, 13 December 2023 (UTC)
Raw ChatGPT assist
Question to ChatGPT: "write a list of activities that can be assisted by LLMs, to write encyclopedias, in order of riskiness". ChatGPT's response appeared to have errors, e.g. drafting is likely riskier than language refinement, but ChatGPT said the opposite. "Assisting in the generation of citations and references" is almost hallucinatory? ChatGPT response:

Sure, it's important to note that language models like GPT-3.5, often referred to as large language models (LLMs), can be powerful tools but may not always provide accurate or up-to-date information. The riskiness of using LLMs for certain activities can vary, and it's crucial to verify the information obtained from these models. Here's a list of activities related to writing encyclopedias, ordered from lower to higher perceived risk:

1. Gathering Basic Information: Summarizing established facts and information. Compiling lists of well-known data.
2. Creating Outlines: Generating initial outlines for encyclopedia articles. Providing structure and key points for various topics.
3. Generating Drafts: Assisting in drafting sections of encyclopedia entries. Offering ideas and context for historical events or scientific concepts.
4. Language Refinement: Improving the readability and coherence of drafted content. Suggesting alternative phrasing for clarity.
5. Explaining Concepts: Providing explanations for complex concepts in simple terms. Offering background information on specific subjects.
6. Citing Sources and References: Assisting in the generation of citations and references. Helping with proper attribution of information.
7. Fact-Checking: Cross-referencing information with reliable sources. Verifying the accuracy of historical or scientific details.
8. In-Depth Research Assistance: Assisting in more comprehensive research on specific topics. Offering insights into less well-known or niche subjects.
9. Handling Controversial Topics: Providing information on controversial issues with caution. Giving balanced perspectives and highlighting multiple viewpoints.
10. Ethical Considerations: Assessing and addressing ethical concerns in content creation. Ensuring that information presented is unbiased and fair.
11. Up-to-Date Information: Verifying the recency of information to ensure it is current. Handling topics that may evolve rapidly with care.
12. Legal and Sensitivity Issues: Avoiding the generation of content that may violate laws or ethical guidelines. Exercising caution when dealing with sensitive or potentially harmful topics.

Remember that while LLMs can be valuable tools, they are not infallible, and human oversight is essential. Critical evaluation, fact-checking, and ensuring compliance with ethical and legal standards should always be part of the content creation process.
Grateful for corrections. This bot-assisted list has been revised by a human; it estimates riskiness from least to most.
Tom B ( talk) 16:23, 20 December 2023 (UTC)
You are invited to join the discussion at
Wikipedia talk:WikiProject AI Cleanup § Proposal: adopting WP:LLM as this WikiProject's WP:ADVICEPAGE.
Queen of Hearts 21:37, 9 January 2024 (UTC)
Blog post description, GitHub repo, based on this Stanford work, which also has a repo, and a live working demo.
Have people noticed those kinds of articles? The outline structure is more distinctive and flamboyant than we usually see from human editors. 141.239.252.245 ( talk) 05:43, 26 April 2024 (UTC)