Are there no circumstances in which we want to keep the talk page for an article name, even in the absence of the article? I'm thinking in particular of some articles which have been through VfD and reappeared ... whilst I appreciate that the discussion might not be relevant if the new article were radically different, equally the talk might cover the subject area, or would relate to the article should the same article be reposted. Just a thought. I think I have Mike Church's wretched game in mind, though I've forgotten its title. -- Tagishsimon (talk)
What should I do when I find a false positive (like Talk:Annie Chapman/delete or Talk:Annotated Lyrics to The Vicar of Bray/Delete)? Both are misplaced VfD pages (the VfD page is a redirect to them), suggesting they are old archived delete debates (from before the current VfD structure). Should I edit the list and remove them? -- cesarb 5 July 2005 22:34 (UTC)
In User:R3m0t/Reports/1 0 it looks to me that pages like Talk:1940 Pulitzer Prize and Talk:1979_NHL_Entry_Draft are placeholders for their respective projects, and the article page has just not been written yet.
Should we leave these alone? -- ssd 11:45, 22 July 2005 (UTC)
The query: (slightly refined from the one I just used for this dump)
CREATE TABLE `nstalk` (
  `main` tinyint(3) unsigned NOT NULL,
  `talk` tinyint(3) unsigned NOT NULL
);
INSERT INTO `nstalk` (`main`, `talk`) VALUES
  (0, 1), (4, 5), (6, 7), (8, 9), (10, 11), (12, 13), (14, 15);
SELECT p1.page_namespace, p1.page_title
FROM nstalk
LEFT JOIN page p1 ON p1.page_namespace = nstalk.talk
LEFT JOIN page p2 ON p2.page_namespace = nstalk.main
  AND p2.page_title = p1.page_title
WHERE p2.page_namespace IS NULL
  AND p1.page_title NOT LIKE '%/Archive%'
  AND p1.page_title NOT LIKE '%/to_do'
  AND p1.page_title NOT LIKE '%/archive%';
Notice the user and user_talk namespaces (2 and 3) are omitted.
You can then pipe that query into mysql, using the -s option. Pipe the output to output.txt. Then:
sed -e 's/^/#[[{{ns:/g' -e 's/\t/}}:/g' -e 's/$/]]/g' output.txt > output2.txt
wc -l output2.txt
split -l 1000 output2.txt
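As a runnable sketch of the sed step, here is the same pipeline applied to two made-up tab-separated rows standing in for real `mysql -s` output (the namespace numbers and page titles are invented for illustration):

```shell
# Simulate the tab-separated output of `mysql -s`: each row is a
# namespace number, a tab, and a page title (sample values only).
printf '1\tExample_page\n5\tAnother_page\n' > output.txt

# Same sed pipeline as above: wrap each row into a numbered wiki link.
sed -e 's/^/#[[{{ns:/g' -e 's/\t/}}:/g' -e 's/$/]]/g' output.txt > output2.txt
cat output2.txt
```

Each input row becomes a line like `#[[{{ns:1}}:Example_page]]`, ready to paste into an edit box.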
(If you don't have commands such as sed and split, try out Linux, or at least get Cygwin).
The last command creates files named xaa, xab, etc. These contain 1,000 rows of links each. They are like the ones I have uploaded.
Just open these files in your favourite tabbed text editor and paste them into edit boxes on Wikipedia.
An easy way to add sections every 50 lines or so would be much appreciated. And of course, the sed can probably be changed into a simple script file. r3m0t talk 22:00, 5 April 2006 (UTC)
#!/usr/bin/perl -w
while (<>) {
    print "==" . ($. - 1) / 50 . "==\n\n" if ($. - 1) % 50 == 0;
    print;
}
Output looks like ==0==, then 50 lines, then ==1==, etc.
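For anyone without Perl handy, the same sectioning can be sketched as an awk one-liner (my guess at an equivalent, not the original author's script; the three-line stand-in input is invented for illustration):

```shell
# Tiny stand-in for output2.txt; the real file has ~1,000 lines.
printf 'line1\nline2\nline3\n' > output2.txt

# Before every 50th input line, emit a "==N==" header followed by a
# blank line, mirroring the Perl script above.
awk '(NR - 1) % 50 == 0 { print "==" (NR - 1) / 50 "==\n" } { print }' \
    output2.txt > sectioned.txt
cat sectioned.txt
```

On the stand-in input this prints a single ==0== header (plus a blank line) followed by the three lines.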
I hope these are carefully considered and excluded from these deletions. -- Centrx 01:15, 7 June 2006 (UTC)
Any chance of an update? This is far too out of date to be helpful anymore. VegaDark 21:39, 28 September 2006 (UTC)