Wikipedia:Bots/Noticeboard/Archive 14
This is an archive of past discussions on Wikipedia:Bots. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current main page.
Help with finding a Wikipedia bot
I've just got a new computer. There was a bot with which you could manually update all the WikiProjects. The URL was something like: wp10.toolserver.org etc etc. Where can I find it? Adamdaley (talk) 21:20, 30 March 2020 (UTC)
- I don't know what you're looking for, but I'd recommend starting at WP:1.0. --Izno (talk) 23:01, 30 March 2020 (UTC)
Location of bot status report
Hi folks. I am opening this thread here as requested in this BRFA. Currently there is a report about all bots at User:MajavahBot/Bot status report, however I'm not really sure if that is the best location. My suggestion is something under WP:Database reports (like Database reports/Bots), however other ideas are much appreciated. Pinging @Headbomb:. Thanks, – Majavah (t/c) 16:46, 5 May 2020 (UTC)
I recommend that the report use YYYY-MM-DD formatting for dates so that the sorting will work properly. – Jonesey95 (talk) 17:01, 5 May 2020 (UTC)
- @Jonesey95: It already uses data-sort-value to properly sort numbers and dates. – Majavah (t/c) 17:05, 5 May 2020 (UTC)
- Oof, I am lazy. Never mind. – Jonesey95 (talk) 17:17, 5 May 2020 (UTC)
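For context, data-sort-value is standard sortable-wikitable markup: it overrides the sort key the table sorter uses for a cell, so a human-readable date can still sort chronologically. A minimal sketch (the rows are hypothetical, not the actual report):

```wikitext
{| class="wikitable sortable"
! Bot !! Last edit
|-
| ExampleBot || data-sort-value="2020-05-05" | 5 May 2020
|-
| OtherBot || data-sort-value="2020-04-30" | 30 April 2020
|}
```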
- Off the top of my head, this isn't really a database report as normally understood. So I feel that something like Wikipedia:Bots/Activity (or possibly usurping Wikipedia:Bots/Status) would be best. Headbomb {t · c · p · b} 17:02, 5 May 2020 (UTC)
- That is a very good point; the only reason I chose database reports was that it would be a central place where all of these report pages would be located. Both of your suggestions are fine to me. – Majavah (t/c) 17:12, 5 May 2020 (UTC)
Protection question
I notice every once in a while an old/archived BRFA will be edited by some IP vandal mucking about. Most of the time these edits get reverted fairly quickly by watchers (either through watchlists or IRC) but would it make sense to just cut them off at the pass and semi-protect all non-open discussions? Primefac (talk) 18:58, 6 May 2020 (UTC)
- Seems like more work than is worthwhile - would an edit filter be feasible instead? — xaosflux Talk 19:04, 6 May 2020 (UTC)
- Perhaps something like Special:AbuseFilter/973 detecting the closed status in the wikitext DannyS712 (talk) 07:33, 7 May 2020 (UTC)
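Edit-filter conditions are written in AbuseFilter's own expression language, but the check being proposed can be sketched in Python. The closure-template names below are assumptions for illustration, not necessarily the exact markers Special:AbuseFilter/973 tests for:

```python
import re

# Does the BRFA's wikitext already carry a "closed" marker?
# Template names here are hypothetical examples of closure markers.
CLOSED_MARKERS = re.compile(
    r"\{\{\s*(BotApproved|BotDenied|BotWithdrawn|BotExpired)\b",
    re.IGNORECASE,
)

def is_closed_brfa(wikitext: str) -> bool:
    """Return True if the BRFA page text contains a closure template."""
    return bool(CLOSED_MARKERS.search(wikitext))

print(is_closed_brfa("{{BotApproved}} Request approved. ~~~~"))  # → True
print(is_closed_brfa("Still an open request."))                  # → False
```

A filter built on this idea would then flag (or disallow) non-autoconfirmed edits to pages where the check matches.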
I may have missed some conversation but this bot is still operating. Its page says it is inactive.
ListeriaBot (talk · contribs · deleted contribs · blacklist hits · AbuseLog · what links to user page · count · COIBot · Spamcheck · user page logs · x-wiki · status · Edit filter search · Google · StopForumSpam)
—¿philoserf? (talk) 16:12, 8 May 2020 (UTC)
- The bot's operating, but nobody bothered to update its userpage. * Pppery * it has begun... 16:17, 8 May 2020 (UTC)
New Pywikibot release 3.0.20200508
(Pywikibot) A new Pywikibot release, 3.0.20200508, was deployed as a gerrit tag and at PyPI. It was also marked as the "stable" release and given the "python2" tag. The PAWS web shell depends on this "stable" tag. The "python2" tag indicates a Python 2 compatible stable version and should be used by Python 2 users.
Among others, the changes include:
- A translation dict for the i18n.translate() function must contain an 'en' or '_default' key if it is not L10N only (Task 220099).
- test_family.py has been deleted (Task 228375, Task 228300).
- tools.ip_regexp has been removed (Task 174482).
- The methods Page.getVersionHistory() and Page.fullVersionHistory() are desupported and should be replaced; they will be removed shortly (Task 151110).
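The new i18n.translate() requirement can be illustrated with a minimal sketch of the fallback rule in plain Python (this is not the real pywikibot API, just the shape of the behavior; the summary strings are made up):

```python
# A translation dict must now carry an 'en' or '_default' key so the
# lookup always has something to fall back to for unlisted languages.
def translate(lang, xdict):
    """Pick a message for `lang`, falling back to 'en' then '_default'."""
    for key in (lang, 'en', '_default'):
        if key in xdict:
            return xdict[key]
    raise KeyError("translation dict needs an 'en' or '_default' key")

summary = {
    'en': 'Bot: fixing double redirect',
    'de': 'Bot: Korrigiere doppelte Weiterleitung',
}
print(translate('de', summary))  # → Bot: Korrigiere doppelte Weiterleitung
print(translate('fr', summary))  # no 'fr' entry, so the 'en' entry is used
```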
The following code cleanup changes are announced for one of the next releases:
- The methods Page.getVersionHistory() and Page.fullVersionHistory() will be removed and should be replaced by Page.revision() (Task 151110).
- Some deprecated compat methods will be removed; they show a FutureWarning when used.
- The SkipPageError exception shouldn't be used any longer; use the BaseBot.skip_page() method instead.
- MediaWiki versions prior to 1.19 (LTS) will no longer be supported (Task 245350).
- The submodule tools.ip will be deleted (Task 243171).
- Featured articles interwiki link related functions will be desupported.
- Pywikibot will require Python version 3.5 or above (Task 239542, Task 213287).
All changes are visible in the history files, e.g. here
DRN clerk bot - Issues
Hello,
The creator of this bot has received requests on their page (it looks like the last time they responded to anything on their talk page was in 2019). The bot updates the template that we monitor to get updates for the cases. Since it is not working correctly, it is not updating the template.
1. It should be changing the status of the case in our table when we change the status on the actual case itself. I wanted to look at the code, but I do not believe I have access to the template to make sure nothing broke and to test it.
2. When it changes status, it should also change the color in our table.
BOT Page: User:DRN_clerk_bot
Owner talk page: Original post: https://en.wikipedia.org/w/index.php?title=User_talk:Hasteur&oldid=945501872 (it appears someone removed this comment or I overwrote it somehow?) User_talk:Hasteur#DRN_clerk_bot_2nd_request
Where they fixed it: User_talk:Hasteur#DRN_Clerk_Bot
Can someone please look into this (or maybe provide me the template code so I can run tests in the sandbox)?
Thanks Galendalia CVU Member \ Chat Me Up 18:43, 11 May 2020 (UTC)
Found old IABot problem - where to report?
- Moved from Wikipedia talk:Bots. Headbomb {t · c · p · b} 10:48, 20 May 2020 (UTC)
I found this 2017 IABot edit where it "spammed" a link. Hopefully it's fixed or no longer being used, but I've no idea how to check. --Hipal/Ronz (talk) 17:06, 25 February 2020 (UTC)
- User talk:IABot? Headbomb {t · c · p · b} 19:45, 25 February 2020 (UTC)
- I was expecting something clearly stating that page was for reports about the bot, and the declined bot request has me more confused. I don't know how to track down further information on it, and don't understand how this bot was even allowed to function or if it still is. --Hipal/Ronz (talk) 20:38, 25 February 2020 (UTC)
- @Cyberpower678: might be able to shed some light? --Hipal/Ronz (talk) 20:41, 25 February 2020 (UTC)
- @Hipal: It seems someone tried to create a bot by a similar name. User:InternetArchiveBot is the bot you're looking for. Headbomb {t · c · p · b} 21:53, 25 February 2020 (UTC)
- Thanks. I'll assume cyberpower678 will see this. Hopefully it was fixed long ago. --Hipal/Ronz (talk) 22:42, 25 February 2020 (UTC)
- Hipal, fixed what exactly? —CYBERPOWER (Chat) 00:45, 26 February 2020 (UTC)
- The bot making mistakes like the one I identified with the diff. --Hipal/Ronz (talk) 00:46, 26 February 2020 (UTC)
- Hipal, I’m still waiting for you to identify the mistake —CYBERPOWER (Chat) 00:53, 26 February 2020 (UTC)
- Hipal, you forgot to mention the problem: it is adding archive URL links to www.nndb.com where it shouldn't be. Ctrl-F for "www.nndb.com" inside the diff and it becomes more clear. -- GreenC 01:15, 26 February 2020 (UTC)
- That's it. The nndb archive url was "spammed" in place of legitimate archive urls in multiple instances. I gave some details here. --Hipal/Ronz (talk) 01:20, 26 February 2020 (UTC)
- This is how I cleaned up after it. --Hipal/Ronz (talk) 01:25, 26 February 2020 (UTC)
- Hipal, ah. I believe that issue was fixed a while ago. —CYBERPOWER (Around) 01:33, 26 February 2020 (UTC)
- I've added a function to WP:WAYBACKMEDIC to log when |url= and |archiveurl= look very different, then see what might be done to fix it automatically, if anything, depending on what the data shows (could be many false positives). -- GreenC 03:24, 26 February 2020 (UTC)
Hipal, conversation moved to community at Wikipedia:Village_pump_(technical)/Archive 179#=url_and_=archiveurl_do_not_match -- GreenC 14:08, 18 March 2020 (UTC)
InternetArchiveBot Google Books and Internet Archive
I don't want user:InternetArchiveBot aggressively replacing my books.google.com links with Internet Archive links (slower search), but should I post at this phabricator thing? --Kiyoweap (talk) 05:36, 19 May 2020 (UTC)
- I concur. The google book links often open to the cited page. The replacement links rarely do in my experience. —¿philoserf? (talk) 02:57, 28 May 2020 (UTC)
Inactive bots - May 2020
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Per the bot policy activity requirements, the following bots will be deauthorized and deflagged in one week. These bots have not made an edit in 2 or more years, nor have their operators made an edit in 2 or more years.
| BOT_user_name | BOT_last_edit | Oper_username | Oper_lastedit | Notes |
| --- | --- | --- | --- | --- |
| NekoBot | 20111013 | Crashdoom | 20170902 | |
| Robert SkyBot | 20120705 | Robert Skyhawk | 20180406 | |
| DASHBotAV | 20130106 | Tim1357 | 20180102 | |
| VoxelBot | 20140531 | Fox Wilson, Vacation9 | 20171010 / 20170919 | |
| JackieBot | 20170204 | Jackie | 20160824 | |
| ReferenceBot | 20170219 | A930913 | 20160403 | |
| CensusBot | 20170831 | Logan-Census | 20170817 | |
Should an operator wish to maintain their bot's status, please place "keep" and your signature in the notes column above for your bot. Deauthorized bots will need to file a new BRFA should they wish to become reactivated in the future. Thank you, — xaosflux Talk 16:03, 6 May 2020 (UTC)
- Required operator talk notices have been left. — xaosflux Talk 16:07, 6 May 2020 (UTC)
- Discuss
- As there has been no response, all of the above bots are being marked retired, and their bot flags removed. If your bot is on this list, and you want to resume operations, please file a new WP:BRFA. — xaosflux Talk 19:08, 28 May 2020 (UTC)
BRFA backlog building up again
As noted by QEDK over at Wikipedia talk:Bots/Requests for approval#BRFA Backlog, there's a backlog building up of requests at WP:BRFA. BAG attention on these when someone has time would be useful - cheers! Naypta ☺ | ✉ talk page | 16:08, 22 May 2020 (UTC)
- @Naypta: I'm working on it --TheSandDoctor Talk 01:45, 28 May 2020 (UTC)
- @Naypta: Looks like we are back under control for the moment. --TheSandDoctor Talk 22:40, 30 May 2020 (UTC)
- @TheSandDoctor: you're a ★ Naypta ☺ | ✉ talk page | 22:44, 30 May 2020 (UTC)
Deferred changes
I've created a section on Wikipedia:Village pump (technical) regarding deferred changes - a method to allow edit filters, bots and ORES to put edits into a queue for manual review.
Would appreciate your thoughts: Deferred Changes
Thanks, ProcrastinatingReader (talk) 19:57, 17 June 2020 (UTC)
Partial block of dashboard bot
Could the BAG take a look at Wikipedia talk:Dashboard#Bot section?, where there is a proposal to give Legobot a partial block from the dashboard. --Trialpears (talk) 22:21, 16 June 2020 (UTC)
- To keep bot-related discussions and potential sanctions in the correct location, I've transcluded the above-linked discussion below. Please discuss any potential sanctions for Legobot after the {{cob}}, be it for partial or full blocking or something else. Discussions about the updating of the Dashboard should of course take place at that location. Primefac (talk) 02:18, 17 June 2020 (UTC)
WT:Dashboard discussion
I think it would be best if bots got their own dedicated section here, which would cover
Opinions? Headbomb {t · c · p · b} 17:25, 25 July 2017 (UTC)
Something on Toolforge was borked. It should begin working regularly now. Legoktm (talk) 18:29, 23 August 2017 (UTC)
@Headbomb: The solution to this issue is to rewrite the entire dashboard using Lua, thereby making it reconfigurable without waiting for a bot to be amended and preventing dozens of unnecessary bot edits per day. I've converted the "Administrative noticeboards" section as an example at Special:PermaLink/911258827. * Pppery * it has begun... 15:32, 17 August 2019 (UTC)
@Legoktm: Could you have your bot stop updating Wikipedia:Dashboard subpages so that the above code can be deployed. * Pppery * it has begun... 18:44, 2 November 2019 (UTC) @Legoktm: * Pppery * it has begun... 01:51, 10 January 2020 (UTC)
I disabled Legobot's dashboard task now, thanks Trialpears for the email. I'll comment on WP:BOTN about my (in)activity. Legoktm (talk) 06:01, 18 June 2020 (UTC)
I've disabled that bot task, as requested. I'm not as active these days (though I hope it'll go up a bit now that I'm done with school), so if someone wants to take over some tasks and give them love, especially the GA and RFC/FRS parts, that would be nice. I might get to spend some time on them this summer, but I'm not making any promises. Legoktm (talk) 08:29, 18 June 2020 (UTC)
TFD of an unapproved bot updating mainspace content
I submitted a TFD for Wikipedia:Templates_for_discussion/Log/2020_June_30#Template:TotalHumanSpaceFlightByNation which looks to be bot-driven without an approval. Please consider leaving a comment there. --Izno (talk) 16:41, 30 June 2020 (UTC)
Pywikibot release 3.0.20200609
(Pywikibot) Release 3.0.20200609 was deployed as a gerrit tag and at PyPI. It was also marked with the "python2" tag and published as the "stable" branch. The PAWS web shell depends on this "stable" branch. The "python2" tag indicates a Python 2 compatible stable version and should be used by Python 2 users. Note: this release is the next-to-last one supporting Python 2.
Among others, the changes include:
- The SkipPageError exception cannot be used any longer; use the BaseBot.skip_page() method instead.
- A scheme may be given with the URL for the pagegenerators -weblink option (Task 251308, Task 251310).
The following code cleanup changes are announced for one of the next releases:
- The methods Page.getVersionHistory() and Page.fullVersionHistory() will be removed and should be replaced by Page.revision() (Task 151110).
- Some deprecated compat methods will be removed; they show a FutureWarning when used.
- MediaWiki versions prior to 1.19 (LTS) will no longer be supported (Task 245350).
- The submodule tools.ip will be deleted (Task 243171).
- Featured articles interwiki link related functions will be desupported.
- Pywikibot will require Python version 3.5 or above (Task 239542, Task 213287).
All changes are visible in the history files, e.g. here
Software change
The mw:New requirements for user signatures will begin on Monday, 6 July 2020. This should eventually reduce Special:LintErrors and signatures that don't link to the local account.
If you want to know whether your signature (or any individual editor's) is okay, you can check your signature at https://signatures.toolforge.org/check. Starting Monday, editors will not be able to create new invalid signatures in Special:Preferences, but the old invalid signatures will keep working for a while. Eventually, invalid custom signatures will stop working.
There is more information at WP:VPT and WT:SIG, and you can ask questions at mw:Talk:New requirements for user signatures. Please share this information with anyone whose bots or scripts need to be able to detect timestamps and signatures. Thanks, Whatamidoing (WMF) (talk) 04:09, 2 July 2020 (UTC)
- When I click on your toolforge link Whatamidoing (WMF) I get the message "Not Found - The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again." MarnetteD|Talk 04:33, 2 July 2020 (UTC)
- URL fixed above. Please try again. – Jonesey95 (talk) 04:43, 2 July 2020 (UTC)
- Thanks for fixing the link. Whatamidoing (WMF) (talk) 19:37, 2 July 2020 (UTC)
Block of EmausBot
I just came across User talk:Emaus#Bot fix of redirect with possibilities after noticing that EmausBot had been blocked some time ago (indefinitely since 12 May 2020). WT79 and admin Samsara each reverted perfectly valid and what should have been totally uncontroversial technical edits by EmausBot because of their confusion over the purpose and application of the problematic {{R avoided double redirect}}, which seems to encourage the creation of double redirects rather than actually avoiding them. {{R avoided double redirect}} is a relatively new redirect template that was created just over five years ago. Here's my explanation of its use, citing an example.
Doctor Marvin Monroe (The Simpsons) redirects to List of recurring The Simpsons characters#Dr. Marvin Monroe. That redirect is tagged with {{R avoided double redirect|Marvin Monroe}}
because it "should" be a redirect to Marvin Monroe, but that too is a redirect to List of recurring The Simpsons characters#Dr. Marvin Monroe. It's not currently tagged with {{R with possibilities}} but it could be... indeed {{R avoided double redirect}} implies that it should be, because the whole point of {{R avoided double redirect}} is that Marvin Monroe has genuine possibilities, indeed the expectation, that some day there will be a standalone article about the notable fictional character "Marvin Monroe". And when that day arrives, it would be an error demanding prompt attention if Doctor Marvin Monroe (The Simpsons) continued to redirect to List of recurring The Simpsons characters#Dr. Marvin Monroe rather than directly targeting the new standalone article. Never mind that "Marvin Monroe" has history indicating the current consensus that this is not an {{R with possibilities}}! We still need to manage those "avoided double redirects" – even if it means getting into edit wars with previously uncontroversial bots.
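In wikitext, the Doctor Marvin Monroe example described above would look roughly like this (a sketch following the documented usage of the template, not a copy of the live page):

```wikitext
#REDIRECT [[List of recurring The Simpsons characters#Dr. Marvin Monroe]]

{{R avoided double redirect|Marvin Monroe}}
```

The template's parameter names the "avoided" intermediate target, so that when Marvin Monroe becomes an article the tagged redirect can be retargeted to it.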
Samsara blocked this bot under the rationale that all bots are required to inspect the edit history of a page before they edit it, or maintain a private database of all of their past edits, and to not make the edit a second time if they have previously made the same edit and been reverted by a human – even when the human edit is counter to policy and the bot's edit was correct. I see nothing in Wikipedia:Bot policy#Bot requirements that specifically addresses this, nor any demonstration that the bot has actually done any harm. Emaus hasn't been particularly active here lately, but I'm hoping that if the bot is unblocked it will resume editing. – wbm1058 (talk) 19:06, 14 July 2020 (UTC)
- "that all bots are required to inspect the edit history of a page before they edit it" That is completely untrue. There may be a rationale for the bot to be blocked, but this isn't one, especially if the bot is WP:NOBOTS-compliant. Headbomb {t · c · p · b} 19:22, 14 July 2020 (UTC)
- I am tempted to unblock here. "I want a double redirect" isn't a valid reason to block a bot --Guerillero | Parlez Moi 19:25, 14 July 2020 (UTC)
- @Guerillero: Indeed it is not. WP:DOUBLEREDIRECT is quite clear that these are not acceptable. Headbomb {t · c · p · b} 19:27, 14 July 2020 (UTC)
- I have notified Samsara of this discussion (don't know if they receive pings). Primefac (talk) 20:23, 14 July 2020 (UTC)
- Sorry, I didn't quite understand the policy and posted a comment on the talk page first; I dropped out of the discussion after not very long, before the bot was blocked. WT79 (speak to me | editing patterns | what I been doing) 20:50, 14 July 2020 (UTC)
- @Guerillero: looking at User talk:Emaus#Bot fix of redirect with possibilities, you'll find that Samsara specifically stated "The block, however, is for not respecting revert actions, which should always trigger desisting or review. So when that is addressed, I will unblock". I don't agree with you that "wanting a double redirect" was the reason for the block, and there is confirmation of that in the bot's block log. --RexxS (talk) 21:17, 14 July 2020 (UTC)
- He was reverting to restore an improper double redirect. --Guerillero | Parlez Moi 21:20, 14 July 2020 (UTC)
- Quite likely, but that's not the point. No bot should edit-war with editors, regardless of which one is right. You can explain to an editor if they make a mistake, but you have no similar recourse when the bot makes a mistake, which is why we don't expect it to be edit-warring. --RexxS (talk) 21:43, 14 July 2020 (UTC)
- I don't think there's many bots that have that level of intelligence, nor is it required by policy to my knowledge. If someone wants to keep a page in a state that triggers a maintenance task, the onus probably rests with them to use the nobots template properly to ward off any bots authorized to perform that maintenance. I support unblocking. –xenotalk 23:24, 14 July 2020 (UTC)
- So there are a couple of items I'd like to be sure are clear: (1) Is this bot making edits not approved by its BRFA, or is this a new additional concern? (2) What is the response of the operator, who is ultimately responsible for every edit made by the bot? — xaosflux Talk 22:04, 14 July 2020 (UTC)
- Perhaps add: (3) If Samsara were to accept that the bot was correct in making the first edit, would they now be willing to unblock? --RexxS (talk) 22:13, 14 July 2020 (UTC)
- To answer the first two questions, on the two pages in question (1, 2) the bot was operating within its parameters; Emaus replied in a manner indicating that. As far as the third question goes, that point is moot (see my reply below). Primefac (talk) 23:46, 14 July 2020 (UTC)
- The applicable BRFA is Wikipedia:Bots/Requests for approval/EmausBot 2 – which was approved 9 1⁄2 years ago, so it seems odd that it would just now have problems after all these years, no?
- The BRFA asserts that it uses mw:Manual:Pywikibot/redirect.py – is that not bot-compliant? wbm1058 (talk) 22:20, 14 July 2020 (UTC)
- @Wbm1058: the software a bot uses is useful to know, but doesn't make it compliant or non-compliant - what does is: (a) is it following the scope of the approved task, and (b) have there been any community standards since enacted that would require it to reduce or cease the task? — xaosflux Talk 22:59, 14 July 2020 (UTC)
- As far as "exclusion compliance" that this bot purports to be, if the bot was reverted - and an exclusion was asserted (e.g. {{nobots}}) that this bot is not complying with - that is a blockable malfunction. — xaosflux Talk 23:00, 14 July 2020 (UTC)
- Sure, but the relevant edit history clearly shows use of WP:Rollback to revert the bot. Hard to use {{nobots}} kryptonite to thwart a bot when you're pressing that button. wbm1058 (talk) 23:13, 14 July 2020 (UTC)
- Pywikibot's default scripts should be nobots compliant by default. Legoktm (talk) 01:21, 15 July 2020 (UTC)
- While I am still interested in hearing Samsara's thoughts on the matter, the bot is operating within the scope of the relevant BRFA. There is no requirement that I (or seemingly any other BAG that has commented here) know of that requires a bot to keep track of the edit history of a page, and if a page is supposed to be an exception to whatever rule a bot is fixing, {{nobots}} should be used to indicate that - NOT edit warring with the bot or blocking it. That being said, I have unblocked the bot. Primefac (talk) 23:42, 14 July 2020 (UTC)
- Really what should be done is gain consensus that double-redirects are acceptable to begin with at WP:DOUBLEREDIRECT. Because so far, they aren't. Headbomb {t · c · p · b} 00:02, 15 July 2020 (UTC)
- Primefac, I support this action with my BAG hat on. —CYBERPOWER (Message) 02:22, 15 July 2020 (UTC)
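The {{nobots}}/{{bots}} exclusion compliance discussed above can be sketched in Python. This is a simplified illustration of the convention (a full implementation also honors allow= lists and per-message options), not the code any of these bots actually run:

```python
import re

def bot_may_edit(wikitext: str, botname: str) -> bool:
    """Return False if the page opts out of edits by this bot."""
    # {{nobots}} excludes all bots.
    if re.search(r'\{\{\s*nobots\s*\}\}', wikitext, re.IGNORECASE):
        return False
    # {{bots|deny=...}} excludes the named bots (or "all").
    m = re.search(r'\{\{\s*bots\s*\|\s*deny\s*=\s*([^}]*)\}\}',
                  wikitext, re.IGNORECASE)
    if m:
        denied = [name.strip().lower() for name in m.group(1).split(',')]
        if 'all' in denied or botname.lower() in denied:
            return False
    return True

print(bot_may_edit('{{bots|deny=EmausBot}} text', 'EmausBot'))  # → False
print(bot_may_edit('Plain page text', 'EmausBot'))              # → True
```

As noted in the thread, a plain rollback leaves no such marker in the page text, which is why reverting the bot did not stop it from repeating the edit.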
Non-free files aren't getting reduced
See User talk:DatGuy/Archives/2020/August#Non-free files aren't getting reduced --Neveselbert (talk · contribs · email) 00:58, 21 July 2020 (UTC)
- @Neveselbert: thanks for the note in case anyone comes looking; as a reminder, no editor (or their bot) should ever be expected to make another edit or action -- if one bot is no longer doing this and someone else would like to run one, they can apply at BRFA. — xaosflux Talk 01:32, 21 July 2020 (UTC)
FYI re Hasteur
- HasteurBot (t · th · c · del · cross-wiki · SUL · edit counter · pages created (xtools · sigma) · non-automated edits · BLP edits · undos · manual reverts · rollbacks · logs (blocks · rights · moves) · rfar · spi · cci)
Hasteur is recently deceased per WP:AN#User:Hasteur. I don't think that means anything needs to be done, but his bots' duties may need to be assumed (one editor is already considering it). --Izno (talk) 23:54, 21 July 2020 (UTC)
- I'll admit, my Python experience isn't the best beyond the occasional project, but now that I've looked at the code of the bots (that I know of), I can interpret it, and I'm happy to offer to take up these duties and ensure the bots continue running smoothly and operate normally.
- The larger issue, however, is discussing with the people at WMCS to transfer ownership of the projects which the bots run on to the new maintainer. I'm unsure how that process works, especially as, sadly, there's no way to contact the original operator in this case. Ed6767 talk! 00:11, 22 July 2020 (UTC)
- I'd also be happy to handle it if you want: User:MDanielsBot is all Python and runs on Toolforge. The code is open-source, so it shouldn't be a huge problem if WMCS can't transfer the tools (but they should be able to, per the toolforge wiki). --Mdaniels5757 (talk) 02:58, 22 July 2020 (UTC)
- Mdaniels5757, if you have more experience in Pywikibot, it may be worthwhile for you to maintain them due to your existing experience - and I've found his two toolforge instances: https://admin.toolforge.org/tool/hasteurbot and https://admin.toolforge.org/tool/enbbsb. Hasteurbot already seems to have The Earwig and Theopolisme as listed toolforge maintainers. Ed6767 talk! 10:34, 22 July 2020 (UTC)
- Also happy to help if I can; MajavahBot is also all Python/Pywikibot on Toolforge. According to wikitech:Help:Toolforge/Abandoned_tool_policy the process on Toolforge side should be pretty straightforward. – Majavah talk · edits 11:00, 22 July 2020 (UTC)
- I'll file the request for enbbsb. @The Earwig and Theopolisme: Do either of you wish to take over https://admin.toolforge.org/tool/hasteurbot? If not, would you be willing to add me (mdaniels5757 on Toolforge) as a maintainer to it? Best, --Mdaniels5757 (talk) 15:34, 22 July 2020 (UTC)
- @Mdaniels5757: I don't really have time right now to take on maintenance of a new bot, so I'm glad to hear that you're willing to take it over. I've added you as a maintainer. — Earwig talk 15:39, 22 July 2020 (UTC)
- Was DRN clerk bot hosted on that Toolforge tool too, or somewhere else? – Majavah talk · edits 15:53, 22 July 2020 (UTC)
- @Majavah: it was on that toolforge tool.
- BRFA filed at 5 and 6. --Mdaniels5757 (talk) 16:18, 22 July 2020 (UTC)
- I've de-botted that account and marked it deactivated. — xaosflux Talk 14:14, 22 July 2020 (UTC)
- Xaosflux, (more of a general query) would the new maintainer, Majavah, have to submit a new BRFA to get it reactivated? Ed6767 talk! 15:49, 22 July 2020 (UTC)
- The bot account is globally locked. We don't usually change the ownership of an account this way. – Majavah talk · edits 15:52, 22 July 2020 (UTC)
- @Majavah: a new operator would need their own bot account - this should be fairly trivial for a bot operator. — xaosflux Talk 16:36, 22 July 2020 (UTC)
- @Xaosflux: You forgot DRN clerk bot which was also operated by Hasteur. – Majavah talk · edits 15:51, 22 July 2020 (UTC)
VPPOL discussion closed: linking by InternetArchiveBot
I just closed a well-attended discussion at WP:VPPOL, "Stop InternetArchiveBot from linking books", with the conclusion that its continued adding of links to Internet Archive is controversial and does not have consensus support. I assume that therefore approval for this bot task should be revoked. DMacks (talk) 16:15, 15 July 2020 (UTC)
- DMacks, and I have to challenge this. The bot was approved based on a proposal to implement the bot. The proposal had unanimous support from the community, and this proposal is linked in the BRFA approving it. By your own words there is no consensus, which does not translate to overturning the approval of the bot. —CYBERPOWER (Around) 16:47, 15 July 2020 (UTC)
- Statement from Cyberpower678: I would ask that BAG not rescind the approval of InternetArchiveBot 3. I am confused how the most recent discussion which had no consensus would overturn the original proposal and bot approval which both had unanimous consensus (proposal; BRFA). There were some concerns raised that a pending lawsuit should change our approach; however, the Internet Archive is functioning no differently today than it was when the bot was approved (their National Emergency Library project was shut down on June 16). Though I began this project as an unpaid volunteer, it's open knowledge that Internet Archive started paying me to work on this project due to its scale and complexity. I am currently paid by Internet Archive to improve Wikipedia, and I have been public about that. Since I started working with the Archive we have rescued over 10 million dead links and added hundreds of thousands of links to books. Our editors and readers benefit from these links with limited access to digitized books, as they would from any library. I accept that the implementation of the book-linking bot task has not been perfect, and I would sincerely like the opportunity to improve it. I suspended the bot's book linking task as of June 14 until we can resolve any outstanding issues, but I politely disagree that the bot's approval should be entirely removed. —CYBERPOWER (Chat) 20:00, 15 July 2020 (UTC)
- The statement that "the bot should be stopped" seems inaccurate; I presume DMacks was referring to only the book-linking task without considering that the bot also does other tasks. I certainly wouldn't go any further than stopping only that task based on that discussion. As for whether even that task should be stopped, I'm not sure. The discussion was created by someone with an admitted WP:COI, and he seemed to WP:BLUDGEON the discussion throughout. It would be interesting to have a better RFC, specifically and neutrally addressing all of the issues people had raised, to evaluate whether consensus actually has changed. But coming so soon after the previous discussion, we might wind up with people just re-arguing that instead. Anomie⚔ 21:01, 15 July 2020 (UTC)
- The statement seems accurate to me, having read the WP:VPPOL discussion. If you are dissatisfied with the close "
Although the question here is "should the bot stop", the real idea I see is that it is controversial and that there is no longer consensus that it should run (wider discussion superseding WP:BRFA). Therefore the bot should be stopped but its existing edits can stand and no prejudice against future manual additions or removals of IA links by uninvolved editors.
", then you should challenge it at WP:AN. If you want clarification from DMacks, then you should ask them. This isn't a competent location for deciding either of those issues. @Cyberpower678: I've followed the links given at User:InternetArchiveBot for the BRFAs for the bot and I can't find the approval for the task of replacing Google links with IA links. Can you point me to where the approval was granted (and perhaps fix the links on the user page)? --RexxS (talk) 21:13, 15 July 2020 (UTC)- RexxS, We were only replacing Google Books links in rare cases and for specific reasons: 1) if the existing Google link was dead; 2) if the Archive had a complete a free full page view whereas Google offered only a few-sentences snippet; 3) if a public domain book was available on both websites (using a nonprofit over a for-profit site per WP:AFFILIATE). Any of those behaviors can be changed to meet community approval. —CYBERPOWER (Chat) 21:57, 15 July 2020 (UTC)
- RexxS, we thought those bot behaviors were consistent with the community approval, and with policy. If I assumed too much, I take responsibility for it and will tailor the bot to the community's preference in any/all of those circumstances. —CYBERPOWER (Chat) 22:03, 15 July 2020 (UTC)
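For readers following along, the three replacement conditions Cyberpower678 describes above can be sketched as a small predicate. This is purely an illustration of the stated policy (the function and parameter names are hypothetical, not IABot's actual code):

```python
def should_replace_google_link(google_status, ia_access, google_access, is_public_domain):
    """Decide whether a Google Books link may be swapped for an
    Internet Archive link, per the three conditions described above.

    Hypothetical helper; parameter names are illustrative only.
    """
    if google_status == "dead":
        # 1) the existing Google link is dead
        return True
    if ia_access == "full" and google_access == "snippet":
        # 2) IA offers a free full-page view where Google offers only a snippet
        return True
    if is_public_domain:
        # 3) public domain on both sites; prefer the nonprofit per WP:AFFILIATE
        return True
    return False
```

Any link not matching one of the three conditions is left alone, which matches the "rare cases" framing above.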
- RexxS Please point to anywhere in the discussion where there was any mention of (much less support for) stopping the tasks described at Wikipedia:Bots/Requests for approval/InternetArchiveBot or Wikipedia:Bots/Requests for approval/InternetArchiveBot 2. Anomie⚔ 02:23, 16 July 2020 (UTC)
- Bot approval requires support *for* a task, it doesn't require support to *not* do a task after someone comes along afterwards and tacks on something that isn't on the initial approval. Since the specific issue of the linking in the RFC was not in the approval, and Cyberpower admits it was assumed those behaviours could be added, and it turns out there is clearly no consensus for them to be performed, the bot does not have approval to perform the specific tasks that were the subject of the RFC. Only in death does duty end (talk) 08:31, 16 July 2020 (UTC)
the specific tasks that were the subject of the RFC
is exactly my point in the post you replied to. I presumed and DMacks confirmed below that "the bot" was something of a metonymy for "the task". I have no idea why RexxS was promoting an obvious error. But you also seem to be missing the fact that there was significant support for the task in question when it was originally proposed and specifically approved; unfortunately the latter is not linked from User:InternetArchiveBot, as that page seems to have not been updated for the additional approval. Anomie⚔ 13:03, 16 July 2020 (UTC)- @Anomie: Please point to anywhere in the close where there was any mention of excluding those tasks. I hope you can see how nonsensical your request was. Now that DMacks has clarified the close, it should render those sort of non-questions irrelevant. The only obvious error was in not linking the BRFA for the task in question from the bot's page. I thought that was a requirement? If it isn't, it damn well should be. --RexxS (talk) 15:57, 16 July 2020 (UTC)
- I'm going to stop replying to you now on the topic of the lack of knowledge of the technical details causing poor wording in the close. You're welcome to continue to believe any wrong thing you want, but I'm not going to waste my time with your refusal to get the point. As for your other question, WP:BOTREQUIRE does say that the bot's user page should include or link to descriptions of the bot's "task (or tasks)" and various other information that is already included in the BRFAs. It looks like Cyberpower678 has now made that addition for IABot (note the linked page is transcluded into User:InternetArchiveBot). Anomie⚔ 21:10, 16 July 2020 (UTC)
- And I'm going to continue to call you out every time you criticise a good-faith close of an RFC by an uninvolved admin, based on your dislike of the outcome. The question was one of whether links should be made to sites that potentially contained copyright-violating material, and I reject your assertion that an admin needs "knowledge of the technical details" to adjudicate on that. You didn't like the outcome, we get it, but you don't have any consensus to do otherwise than to accept it or ask for clarification of it. --RexxS (talk) 22:05, 16 July 2020 (UTC)
- I am Mark Graham. I manage the Wayback Machine at the Internet Archive, as well as our efforts to help make Wikipedia sites more useful and reliable by fixing broken links and adding links to various resources available from archive.org. We love working with the global Wikimedia community to collaborate on shared solutions for reliability and verifiability. For the last 5 years InternetArchiveBot has been linking to archived snapshots of web pages that no longer function or will soon not function. Our project to link to digitized versions of books makes Wikipedia's references more accessible for readers. We have no profit motive here. We don't have ads and don't gain revenue from increased traffic. Like Wikipedia, we work to keep our servers up and running, not to make money. There was a concern raised that we somehow profit from Better World Books. Better World Books is owned by Better World Libraries, a nonprofit. We take the wishes of the Wikipedia community very seriously, and we would very much like to continue helping the community with its inspiring mission. Markjgraham hmb (talk) 21:19, 15 July 2020 (UTC)
- (edit conflict) I didn't participate in the original discussion, and, having just read through it, I don't see consensus to overturn the previous consensus and BAG approval. Supporters argued to discontinue linking for two main reasons: (1) potential copyvio and (2) moral reasons. The first point was pretty soundly refuted by the opposes and even some supporters (like Masem), so I don't see any consensus that BAG approval should be revoked for copyvio reasons. The other main argument is that it has the potential to harm downstream users and so we should avoid using the links on principle. This was a minority position among the supporters, and even if we assign them full weight, there still wouldn't be a consensus to overturn the previous consensus. So despite the close, I don't think this request should be acted on without further discussion. Whether that be here or at WP:AN I don't really care, but I don't think this is a rubber stamp request. — Wug·a·po·des 21:36, 15 July 2020 (UTC)
- I am inclined to agree with Anomie and Wugapodes. @Markjgraham hmb: thank you for your clarifying comments here, and for your thoughtful support over the years. Wikipedia is certainly better for the thorough links to archived snapshots that have become uniform through such work - a combination of two of the oases that make the Internet not suck. Cyberpower, I appreciate your openness to feedback -- can you share a few examples of replacing Google Books links? I haven't come across this myself but saw it mentioned by a few different people. – SJ + 01:51, 16 July 2020 (UTC)
- Replying as I wrote the GB conversion side of the bot. Google diffs: diff 1, diff 2, diff 3, diff 4. There were also some PD conversions such as diff per WP:AFFILIATE which recommends converting for-profit to non-profit. There are no immediate plans to convert more Google, because there are no more to convert. The conversion mostly finished in March. The conversion rate was less than 10%. -- GreenC 03:10, 16 July 2020 (UTC)
- I've been asked to clarify several issues of my close. First, I'm not a BAG person, so I don't know the technical details of what is a "bot" vs "one bot-task among several that a bot runs". My close is narrowly focused on whatever you call the process(es) that add the IA links (de novo or as conversion from some other archive link) because that's the only issue that was raised in the discussion (not a rejection of the bot-operator or whatever else they may do). Second, the original approval discussion that was mentioned in the VPPOL discussion was not well-attended (and although the closer claimed strong support, there was some dissent). It's BAG's prerogative to decide what's "good enough" when they are the sole site of discussion. But more significantly, the discussion was limited and the closer of the botreq noted that the approval queue was backlogged and the task seemed safe/noncontroversial enough so "why not?". This new discussion was substantially more attended, in a more public place (lots of WP-sitewide folks rather than only those who choose to know about approval of upcoming bot tasks), and demonstrated that the task is definitely not non-controversial. So that's why I closed it as a case of the original WP:CONLIMITED (approved) being replaced by the new no-consensus (==not approved). Another admin pointed me to WP:BOTREQUIRE #4. I did not take the strongest interpretation/"letter of the law" of WP:NOCONSENSUS regarding addition of external links to unwind the previous edits because I did not see that point made strongly in the discussion (it no longer has consensus). DMacks (talk) 04:53, 16 July 2020 (UTC)
- Thanks for the reply DMacks, that makes sense, especially the reference to BOTREQ#4. Would you be able to be a bit more precise about what you think is and is not appropriate given the discussion? Given your close, it seems that IA bot linking to any IA page should not be done, but if it's more narrow than that, it would be helpful to know what kinds of links are appropriate. Given your explanation, I agree replacing Google Books links has no consensus and should be stopped. Does the lack of consensus extend to linking to books that were otherwise unlinked? Some participants distinguished between links to works under copyright and those not, is there no consensus for both of those? It's possible the answer is yes, but either way it would be helpful to have more clarity on exactly what doesn't have consensus given that discussion. — Wug·a·po·des 06:21, 16 July 2020 (UTC)
- @DMacks: You seem to have mischaracterized the discussion at Wikipedia:Village pump (proposals)/Archive 159#Expanding InternetArchiveBot to handle book references, or considered only Wikipedia:Bots/Requests for approval/InternetArchiveBot 3 without following the link from "Links to relevant discussions". I doubt that WP:Village pump (policy) is substantially more public than WP:Village pump (proposals), or that WP:Village pump (proposals) is attended only by those who choose to know about bot tasks. As for the VP discussions themselves, I count 24 userpage links in the original (15 explicit support !votes, 1 neutral, 0 oppose) versus 38 in the new discussion (harder to count as it wasn't conducted as a !vote, but it seems about 14 support, 12 oppose, 1 neutral, with several of those supports being very weak). Anomie⚔ 13:03, 16 July 2020 (UTC)
- It's not my job as uninvolved closer of the discussion to supervote on the value of the goal or target site, just to pull together what is mentioned in the discussion. The current discussion demonstrates that there is not current consensus support for the bot task. It highlighted a weakness in the original approval process last year even though that was based on a consensus-support discussion for the underlying task at that time. This current discussion newly or more strongly notes some opposing ideas that were not raised previously, or not as strongly: not consensus oppose, but not consensus support (such as overwhelming rejection of minority opposition ideas). "Consensus support to make edits" seems to be what a bot-task requires. First, there was definite opposition to converting of pre-existing links, so that shouldn't continue. Second, the fact that the new discussion was substantially during the time when IA made the jump into the emergency-COVID-library (NEL) project and then backed down in the face of legal claims made this discussion messy. I wish we would have first had a discussion framed from scratch about suitability of the site that separately asked about free vs copyrighted works, with ping to WP:ELN, at a time when NEL wasn't happening. I've been squinting at the old and new discussions again for the past hour or two (not just my take on the new discussion, and not treating the current bot-task as a single entity to confirm/reject). Taken together, I think bot-adding links to works that are out-of-copyright still has consensus (not as strong as it originally did) as long as registration is not required to access the full work being linked. DMacks (talk) 10:57, 17 July 2020 (UTC)
- I agree the close does not reflect the (fact-based) consensus. There's a lot of FUD in that RFC, very little of which is fact-based. IA links are not copyright violations, nor do they host copyright-violating materials. Headbomb {t · c · p · b} 15:52, 16 July 2020 (UTC)
- I apologize for coming so late into this discussion; after the closure, I had not dropped by the Pump much and had not seen notice that it had begun. I see some strange claims being made here, like "There was a concern raised that we somehow profit from Better World Books. Better World Books is owned by Better World Libraries, a nonprofit." BWL is IA's long-time partner. That's who they're throwing sales links to. The fact that it is a non-profit does not mean that it does not make a profit off of the sales, it's just a matter of what they do with that profit. The claim that IA doesn't host copyright violating materials is at least a highly controversial one, as they are currently being sued by significant publishing players over claims of just such copyright infringement. As for the closure, most of the !votes cast were done so before there was any discussion of the links between Better World Books and IA; I apologize for not including that in the initial statement of concern, but I was unaware of the linkage at the time I launched the discussion. Once I added that information to the discussion on June 17, there was a strong swing to stopping the linking (bolded !votes after that point: 6 support, 2 oppose.) As for the question of whether links to out-of-copyright books should continue, I will note that my original request was that the bot "be halted and not allowed to run until it is changed to no longer link under-copyright works" (or until the suit cleared them), but then that was before the promotional concern was raised. As for the existing unanimous discussion, there was only one person who even raised the question of US copyright there, @Masem:, and that user continued to show concerns about copyright implications in the newer discussions. There was no concern raised about the applicability of WP:COPYVIO (which does not require that copyright violation be proven before a link is prevented, but only suspected.)
The link to Better World Books was not brought up. And thus, without those factors, the bot change was approved on the basis of "this task is not controversial".
- I will also note that the bot seems to be working in ways that deviate from the original proposal, at least if read with Wikipedian eyes. The proposal said it would make links for "referenced books", which might be read as "books that are made referenced to", but I would suspect that most of us editors would read it as "books used as references". It has not been limiting itself to books within REF tags, but has been including books that are mentioned in the body of the article... and as most of us editors know, external links are discouraged within the body of articles. If there is to be a new RFC done, that might want to be rolled into it... or it might be more of a distraction from the WP:COPYVIO and promo-y concerns. --Nat Gertler (talk)
- Let me just add… There is a sea of difference between manually added links, where human judgement and local consensus are involved in the decision, and proactively mass-adding them with a bot. Any RFC about this should take this distinction into account: I absolutely support human editors adding these links (with possibility for discussing the suitability in every individual instance, if there is a copyright concern or a primacy of "best" link issue), and bots and scripts that help humans to do so, but I very much oppose bot addition of them. In addition to the lack of human judgement involved, tasking a bot to do this is not just permitting such links, it is mandating them in all cases on all articles regardless of local consensus. That's quite a different question to be asking the community to support. --Xover (talk) 10:53, 28 July 2020 (UTC)
- Hi Xover. The bot has added 432,000 links to books. The scale of benefit is simply not possible through manual additions alone. As for honoring local consensus, it's really easy to just add
{{bots|deny=InternetArchiveBot}}
to the page. The bot respects that and will never add any links on that page. —CYBERPOWER (Chat) 18:25, 29 July 2020 (UTC)- @Cyberpower678: First off… Am I supposed to be impressed by that number? Human beings have built an encyclopedia with 6,925,369 articles. Even assuming an average of less than one book cited per article, humans have still outperformed your bot by at least one order of magnitude by adding actual citations and not just a simple URL. Just because it is technically possible for a bot to do something does not mean that it should do that thing, nor that the net result is better (more links is not inherently a good thing).Second… Almost all the actual benefit to Wikipedia can be had by making the bot act as an interactive tool that is easily available to human editors when they work on an article. A Gadget (even a default gadget) can detect the addition of a cite to a book whose ISBN is known and offer to automatically add the link, at the user's discretion. Or, like the link fixer or dab solver, can be made easy to run for the human editor, fully or semi-automatically, but at the human being's control. Or it could even work like the dab link notifier, leaving suggested links on the user's talk page when they have added a cite for which a link exists but was not included in the cite. Or like Anomie's orphaned refs detector, that leaves notifications for humans on the article's talk page when the error can't be automatically fixed (I know you have the code for leaving talk page messages…). In any of these scenarios you can leave it up to the user with a simple checkbox whether to add links to books that are not fully available if that matters to them (some care, some don't). Give me a stupidly easy way to add such links and I'll use the heck out of it, and probably recommend it as best practice the way adding WayBack links already is. 
Or, hey, you could have the humility to simply list the IA as one lookup service among the others at Special:BookSources, which facility is well used and provides numerous benefits beyond what a single hard-coded URL does. That's how most of our WP:V is done: having direct links to an electronic edition is a convenience, nothing more. Or build a bibliographic database that lets all our citation-assistant tools easily look up what scans IA has for a given work and add that link (or link directly to the high-res single-page .jp2). That would enable all sorts of other unanticipated benefits and innovations. And, hey, by actually collaborating with the Wikimedia Movement, we could, together, build a common bibliographic database based on Wikidata that could be used not just to look up a scan URL on IA, but as a basis for structured citations here, for advanced integrations on Wikisource, and that can be expanded to include HathiTrust and JSTOR (and even Google Books if they ever return to being useful for anything) and similar. And it would enable crowdsourced addition and correction of that bibliographic data, not by IA and Wikimedia cannibalising and competing for each others' volunteers, but by taking advantage of the work that those volunteers already do today and sharing the benefits. The only benefits that cannot be achieved through these means are the benefits that IA derives from having 350k+ backlinks from Wikipedia that they control. I in no way begrudge them that benefit (a few quibbles and recent bonehead moves aside, I'm a big fan of IA), but on Wikipedia the benefit to IA is of secondary importance (not no importance, just secondary to our goals). Once one gets one's head out of the one-way linkspam track there are so many ways the IA and the Wikimedia Movement can work together and have overlapping goals. You still listening Mark? Finally,
{{bots|deny=whateverbot}}
is in practice non-functional because any attempt to add it to an article is immediately removed by bot (well, AWB usually, but same diff) with the reasoning that either the bot is operating within its BRFA and preventing it with{{bots}}
is inappropriate, or the bot is not operating within its BRFA, in which case it must be taken to ANI so the bot can be blocked and adding{{bots}}
to an article is inappropriate. There are other problems with pointing at{{bots|deny=whateverbot}}
as some sort of magical reason why the actual problem with a bot needn't be fixed (for one, it shifts the burden from the bot operator to every single other editor to demonstrate what is controversial and what isn't), but they're mostly moot so long as that particular Catch 22 is allowed to keep operating. --Xover (talk) 18:36, 3 August 2020 (UTC)- I am listening and we completely agree that scale matters. That's why InternetArchiveBot has rescued more than 11,000,000 dead links on Wikipedia. Links that provide value to readers are the only kind of links we're interested in.
- While very experienced or knowledgeable editors like you may work with gadgets or edit interfaces, the majority will not, so manual approaches are less likely to result in benefit at scale. I see your point about multiple paths to a book, and we've hoped that giving readers a direct link to a live version is more useful than a link to an index page with 100 different options.
- We are interested in helping to make the Web more useful and readable. To that end, we are very happy to see the entirety of information we've collected integrated into Wikidata; helping to make Wikidata a more comprehensive bibliographic information source is one of our main goals. We are a service-provider to the internet. We preserve webpages. We digitize books we own. We make our collections available to every person for free.
- We have been collaborating with the Wikimedia movement for more than 8 years, since we started archiving every externally referenced URL added on all of the Wikipedia language editions, and then fixing 404 errors across 33 language versions of Wikipedia. The Internet Archive was a founding participant in Wikicite, the effort to create that very shared open bibliographic database we both dream of. That’s why we have been collaborating with Wikimedia for many years--with the community, with The Wikipedia Library, with the Community Programs team at the Foundation, with Wikicite, with our presence at events like Wikimania and Wikiconference North America, and with public collaboration between our nonprofits. We are on the side of Wikipedia readers and always will be. Markjgraham hmb (talk) 15:43, 4 August 2020 (UTC)
- If someone is AWBing away {{bots}}, you should probably ask them to, y'know, not do that. It might help if we timestamped {{bots}} like other maintenance tags though. --AntiCompositeNumber (talk) 19:11, 3 August 2020 (UTC)
- There is also {{cbignore}} for fine-tuning per cite. -- GreenC 20:27, 29 July 2020 (UTC)
As an editor, I am concerned to think my work is being messed with by an unthinking bot. I did not read whatever that bot is now linking my citation as, and the bot did not read whatever they are now saying my citation is. When there comes a time that the bot is actually able to read books, study what is in books, write content based on what it actually read, and make citations based on what it actually reads, then maybe. -- Alanscottwalker (talk) 21:52, 29 July 2020 (UTC)
- The bot matches the edition cited. For example, in Albert Stubblebine citation #2 for Men Who Stare at Goats, there are two editions available at Internet Archive, but not the correct edition (Simon and Schuster 2004), so it was not linked. Compare with Spitsbergen, where the correct edition and page are available. -- GreenC 16:36, 30 July 2020 (UTC)
Some PyWikiBot trouble with flagging bot edits
See Wikipedia:Village_pump_(technical)#PyWikiBot_(or_Muninnbot/Toolforge)_not_flagging_bot_edits. (TL;DR: Muninnbot is not flagging its edits correctly, and I see nothing in the code that is off, so either I missed something or there is a bug in PWB.) TigraanClick here to contact me 14:08, 14 August 2020 (UTC)
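For background on what "flagging" means at the API level: MediaWiki only records an edit as a bot edit when the request both includes the bot parameter and is made by an account holding the bot user right on that wiki; if either piece is missing, the edit shows up unflagged in watchlists and recent changes, which matches the symptom described above. A minimal sketch of the relevant edit payload (the helper name is illustrative, not Muninnbot's actual code):

```python
# Sketch: how a MediaWiki "action=edit" request asks for the bot flag.
# The server honours "bot": "1" only if the logged-in account has the
# bot user right on that wiki; otherwise the edit is silently unflagged.

def build_edit_payload(title, text, summary, csrf_token, as_bot=True):
    """Build the POST payload for a MediaWiki action=edit request."""
    payload = {
        "action": "edit",
        "format": "json",
        "title": title,
        "text": text,
        "summary": summary,
        "token": csrf_token,
    }
    if as_bot:
        # Omitting this key entirely (not setting it to "0") is the
        # usual way to make a deliberately non-bot edit.
        payload["bot"] = "1"
    return payload

payload = build_edit_payload("User talk:Example", "Hello", "test", "TOKEN+\\")
```

In Pywikibot this corresponds to the bot-flag argument accepted by page.save(); note that the server silently ignores the parameter if the account lacks the bot right, so an unflagged edit can indicate a rights problem rather than a client-side bug.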
BRFA backlog
FYI, WP:BRFA has a decent backlog right now (and I'm not just saying that because I'm on it twice). Any BAG member willing to give old requests some attention will receive my appreciation and a barnstar :).
Cc. a few active BAG members (no offense to the ones I'm not pinging): @Primefac, Xaosflux, and Enterprisey.
Best, --Mdaniels5757 (talk) 21:54, 2 August 2020 (UTC)
- Jeez, you take a semi-wikibreak for a while and everyone gets impatient... looks better now. Primefac (talk) 22:27, 2 August 2020 (UTC)
- @Primefac: Thank you! --Mdaniels5757 (talk) 02:51, 3 August 2020 (UTC)
Gentle note that maybe this is not quite resolved if one person taking a break (or having a possible COI with a specific BRFA, to refer to my one) makes the entire system grind to a halt. We probably need more BAG? And looking at the member list, I think a couple miss the activity requirements currently, and more will in the next month or two. ProcrastinatingReader (talk) 13:20, 30 August 2020 (UTC)
Proposal - BAG inactivity tracking
It was mentioned a few days ago that some of the BAG members are reaching the inactivity limits set forth in the 2018 inactivity RFC. Out of curiosity, I started glancing through some of the contribs of the members and realized that there are a lot of places to check (keep in mind that BAG activity is based on bot-related activity, not just overall activity like with admins)! I mentioned the issue to JJMC89, as they run the bot that keeps track of admin activity, and they said it would be fine assuming we could get specific criteria/pages nailed down. So there's one proposal and a follow-up:
Should we have a bot track BAG activity, in as much detail as possible, in order to assist in maintaining the member list?
I ask the question for two main reasons: first, the RFC itself was just a hair over "no consensus", and second, I feel like whatever pages we give the bot to check activity on, there will always be outliers. In other words, I feel like the bot would get us 95% of the way there, with only a final check to make sure nothing non-obvious wasn't missed (quadruple negative?). At the very least, though, it would give us an indication of who might be approaching the limit (if only to give them a heads-up about it).
The follow-up, if this proposal goes forward, is which pages should we include in the automatic tracking (I mean, anything in {{botnav}} is pretty obvious, but things start getting weird as you go further down the Category:Wikipedia bots tree). Primefac (talk) 17:42, 1 September 2020 (UTC)
- You'd need some way to link BAG members to their bot accounts, too. Any messages placed on their bot's user or talk page (incl subpages) should count as being active. Some messages on their personal user talk page could be bot-related, too, but it's not exactly possible to determine this case automatically. ProcrastinatingReader (talk) 20:59, 1 September 2020 (UTC)
- True, though to be honest I'd rather be more inclusive than not; it's not the end of the world if we accidentally leave someone on the rolls because a bot said they made a bot-related edit when it really wasn't. Primefac (talk) 21:07, 1 September 2020 (UTC)
- (edit conflict) Similarly, any contribs by the bot likely count as activity too. So, in short, I think many cases would be addressed by some kind of list that maps a BAG member to all their bot accounts, then checks if that bot has made any contribs, or whether the operator has posted on the bot's user or talk pages. This should flag as being active. Then the case you mentioned: edits to any of the {{botnav}} pages, their talk pages, or any subpages of the previous two. I think this probably catches most instances of "bot-related activity"? ProcrastinatingReader (talk) 21:12, 1 September 2020 (UTC)
- I'd argue that point, there have been a few instances where a bot is still running but the operator has gone inactive. No one really noticed until the bot started malfunctioning. Primefac (talk) 21:26, 1 September 2020 (UTC)
- Hmm, I was basing that off this (in version D):
After two years without any bot-related activity (such as posting on bot-related pages, posting on a bot's talk page, or operating a bot)
I guess it depends if you class leaving a bot running unsupervised while you're inactive as "operating a bot". ProcrastinatingReader (talk) 21:42, 1 September 2020 (UTC)
- From the back-and-forth I've seen from bot operators who are even slow to respond to concerns, yeah, I'd say that a botop who isn't active isn't actually running the bot. Primefac (talk) 19:24, 5 September 2020 (UTC)
- To add, actually, a far easier way to do it might be to check if they've posted to any bot page. After all, posting to someone else's bot's talk is to count as activity, too, and checking this doesn't involve creating a map. User pages in Category:All Wikipedia bots seems to do that. Just an initial 2c. ProcrastinatingReader (talk) 21:26, 1 September 2020 (UTC)
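Mechanically, the check described in this thread reduces to two standard MediaWiki API queries: a list=categorymembers call to enumerate Category:All Wikipedia bots, and a list=usercontribs call per BAG member, intersecting the results. A rough sketch with illustrative function names (this is not an actual tracking bot's code):

```python
# Sketch of the two MediaWiki API queries a BAG-activity bot could use.
# Query 1: pages in Category:All Wikipedia bots (the bot user pages).
# Query 2: a BAG member's recent contributions, to intersect with those
# pages (plus the WP:BOTN / WT:BOTPOL / WP:BRFA family of pages).

BOT_PAGES_QUERY = {
    "action": "query",
    "format": "json",
    "list": "categorymembers",
    "cmtitle": "Category:All Wikipedia bots",
    "cmlimit": "max",
}

def contribs_query(username, since_iso):
    """Parameters for listing one user's edits back to a cutoff date."""
    return {
        "action": "query",
        "format": "json",
        "list": "usercontribs",
        "ucuser": username,
        "ucend": since_iso,   # usercontribs runs newest -> oldest
        "uclimit": "max",
        "ucprop": "title|timestamp",
    }

def is_bot_active(contrib_titles, bot_page_titles):
    """Active if any contribution touched a known bot-related page."""
    return not bot_page_titles.isdisjoint(contrib_titles)
```

Pages like WP:BOTN and its talk, WT:BOTPOL, and the BRFA subpages would be added to bot_page_titles alongside the category members; as noted above, a human would still review the borderline cases the intersection misses.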
Teahouse ping list
Hello everyone! I am compiling a list of editors willing to be pinged to the Teahouse for when better expertise would be helpful. It is currently being drafted at User:Usedtobecool/Tea. Interested volunteers are requested to add themselves. I would like to assure you that bot-related queries are one of the rarer topics at the Teahouse. Of course, on the occasions that they do come up, they are almost always best answered by someone with active experience in the area. I would also assure you that being listed is more a fellowship of the ring than an unbreakable vow. Thanks in advance, and best regards, Usedtobecool ☎️ 11:18, 10 September 2020 (UTC)
possible unapproved bot
Special:Contributions/Droid I am making mass edits to books changing things to "listed the book as" form very mechanically. The highly bot-like editing rate and the lack of edit summaries as soon as they made 10 edits to get autoconfirmed suggest it might be an unapproved bot. Graywalls (talk) 02:29, 10 September 2020 (UTC)
- Huh. Does BKFIP ever log in? --Izno (talk) 03:03, 10 September 2020 (UTC)
- @Graywalls: the edit rate isn't high enough to think this is a bot using the wrong account - if this user is making bad repeat edits and not responding to talk messages, you can list at WP:ANI for review. — xaosflux Talk 11:44, 10 September 2020 (UTC)
- Yeah, this looks more like a find/replace search, not really high enough speed to indicate a bot or hacked script like some users have done in the past. ANI would be the way to go if it really is a disruption. Primefac (talk) 12:01, 10 September 2020 (UTC)
Double redirect vandalism discussion at AN
Bot operators and BAGs experienced with double redirects may be interested in this discussion at AN: Wikipedia:Administrators'_noticeboard#Double-redirect_fixing_bots:_a_cautionary_tale (perma)
I think it requires specialist knowledge to resolve, if this is even deemed to be a problem in the first place. No double-redirect bots keep logs, afaik, and even if they did, wiki pages aren't well suited to logs of this nature. Not sure if it's even easy to revert such changes, as the targets would change with the fix. I think it's somewhat problematic, but I don't have enough experience with double redirects to think of clear and effective solutions myself. Any thoughts / comments? ProcrastinatingReader (talk) 18:02, 11 September 2020 (UTC)
- Jeez, first we get complaints that redirect fixers aren't fixing double redirs fast enough, and now we're getting complaints that redirect fixers are fixing double redirs too fast. Primefac (talk) 18:20, 11 September 2020 (UTC) And yes, this is humour, but I'm not really sure how we solve all of the issues associated with reckless vandalism.
- It could be useful if {{R avoided double redirect}} were used. That allows the redirect to be flagged with the intended target, and if the intended target winds up being retargeted or un-redirected it'll add a maintenance category. Double-redirect-fixing bots could probably safely apply this (if not already present) when they fix a double redirect. Anomie⚔ 19:30, 11 September 2020 (UTC)
- Wikipedia:Double redirects are usually created after a page move, but in this case the redirect replaced non-redirecting content (which could indicate a cut-and-paste move). These days such changes trigger an edit filter: (Tag: New redirect). Rather than mindlessly fix double redirects caused by edits triggering this filter, the bots should detect this situation and flag it for administrator attention. It could mean a cut-and-paste move needs tending to. Or it could mean the person was deemed to not be notable and their biography was "merged" to another page by "blank and redirect", in which case the double redirects should be fixed. – wbm1058 (talk) 19:53, 12 September 2020 (UTC)
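The detect-and-flag idea above (skip the automatic fix when the middle redirect replaced substantive content) can be expressed as a simple check on the page's last two revisions. A sketch assuming English-wiki #REDIRECT syntax; the function and variable names are illustrative, not any bot's actual code:

```python
# Sketch: before auto-fixing a double redirect A -> B -> C, check whether
# B's conversion into a redirect replaced substantive content (possible
# cut-and-paste move or blank-and-redirect). If so, skip the fix and
# flag the page for human review instead of silently retargeting A.
import re

# English-Wikipedia redirect syntax; other wikis add localized keywords.
REDIRECT_RE = re.compile(r"^\s*#REDIRECT\s*\[\[", re.IGNORECASE)

def should_flag_for_review(current_wikitext, previous_wikitext):
    """True if the page just became a redirect over real content."""
    if not REDIRECT_RE.match(current_wikitext):
        return False  # not a redirect at all; nothing to fix
    if previous_wikitext is None:
        return False  # created as a redirect (e.g. after a page move)
    # Previous revision had non-redirect content: suspicious conversion.
    return not REDIRECT_RE.match(previous_wikitext)
```

A bot would fetch the two most recent revisions of the middle page (e.g. prop=revisions with rvlimit=2) and pass their wikitext in; a True result means report the page for administrator attention rather than fixing the double redirect automatically.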
DYK Twitter bot
Please participate in the discussion at Wikipedia_talk:Did_you_know#DYK_Twitter_bot. KCVelaga (talk) 11:43, 15 September 2020 (UTC)
Centralising discussion
I don't even know where the correct place to post this is.
We have 7 meta discussion venues relating to bots. Namely, WT:Bots, WT:Bot policy, WT:Bot Approvals Group, WP:Bots/Noticeboard, WT:Bots/Noticeboard, WT:Bots/Requests for approval, WT:Bot requests. Would it be appropriate to centralise discussion for some of these? Perhaps centralising WT:Bot policy, WT:Bot Approvals Group, WT:Bots/Noticeboard, and WT:Bots/Requests for approval; probably to WT:Bot Approvals Group, as all involve closely related meta-discussions relating to bot policy or bot operations and hence under close purview of the BAG. WT:Bots is an info page and unrelated so keep that, and of course keep BOTN for non-meta discussions (ie those relating directly to a bot).
Currently discussions seem a bit all over the place. Wikipedia_talk:Bots/Requests_for_approval#QEDKbot was created over there, the template for that bot's approval suggests using WT:BRFA, although the bot policy (specifically WP:BOTAPPEAL) suggests using WP:BOTN. Since it isn't a metadiscussion, seems like WP:BOTN would make most sense? ProcrastinatingReader (talk) 13:10, 15 September 2020 (UTC)
- We had this issue with WP:AFC a few years ago, and the end result was that everything got smoshed together into the centralized WT:AFC. Given how infrequently each individual bot-related talk page is edited, on the whole I see no downside to having all bot-related talk pages (with the exception of WT:BAG, which is its own thing) redirected here. Primefac (talk) 13:28, 15 September 2020 (UTC)
- (edit conflict) I wouldn't redirect WT:Bot policy, it's probably best for the policy to continue being discussed on its own talk page.
- WT:Bot Approvals Group is mainly used for applications and inactivity discussions. I suppose those purposes could be moved here to WP:BOTN, although that might confuse the application process somewhat.
- WT:Bots/Requests for approval is for discussing the BRFA page and process itself. I suppose it could be redirected here to WP:BOTN, but that would need to be done carefully to avoid confusing all the history it has. It was historically used for officially reviewing approvals, but that was changed in 2017 after brief discussion and apparently the templates weren't updated, which is the source of the discrepancy in instructions you noted. Even when they are updated, the substed copies on all the old approvals won't automatically be changed to match.
- WT:Bots/Noticeboard is for discussing the noticeboard, should that ever really matter. Like the various Village pump talk pages, it shouldn't be used often but when it is it might be confusing to have it redirected to the noticeboard itself. Anomie⚔ 13:38, 15 September 2020 (UTC)
- Just for clarity, I was suggesting centralising meta-discussions to a single WT page, rather than WP:BOTN, and keeping the non-meta discussions (ie, relating to an active bot) at WP:BOTN. ProcrastinatingReader (talk) 13:48, 15 September 2020 (UTC)
The question mostly is, what are you trying to discuss?
- WP:BOTN should be for general bot-related discussion. This should be where people are coming for 90%+ of discussions. When in doubt, this is the place to start.
- WT:BOTN should be to discuss improvements to WP:BOTN specifically. This should be mostly unused.
- WT:BOTS should be to discuss improvements to the overview page WP:BOTS specifically. This should be mostly unused.
- WT:BOTREQ should be to discuss improvements to the overview page WP:BOTREQ specifically. This should be mostly unused.
- WT:BAG should be used to discuss improvements to the overview page WP:BAG specifically, or BAG-specific issues. This should be mostly unused.
- WT:BOTPOL should be to discuss bot policy and improvements to WP:BOTPOL specifically. This should be used for BOTPOL-related clarifications and policy change discussions.
- WT:BRFA should be used to discuss the BRFA process in general, or the specific improvements to the WP:BRFA page. This should be mostly unused.
So nearly all of these pages should have a notice similar to WT:BOTS, if they don't already. Headbomb {t · c · p · b} 13:32, 15 September 2020 (UTC)
- To be clear, I'm not against some centralization. Just explaining the current status. Headbomb {t · c · p · b} 13:36, 15 September 2020 (UTC)
- I think the point being made is that while you are absolutely spot-on with your assessment, and basically this page should be the only one that sees major traffic, that's not the case; people think "bot" and go to WT:BOTS, or think "BRFA issue" and go to WT:BRFA. I saw (and approve) of your recent change to WT:BOTS, but even after everything else is archived it's not an incredibly obvious note. Of course, WT:IRCHELP has a HUGE red banner and no one reads that either. Primefac (talk) 13:39, 15 September 2020 (UTC)
- That makes sense, but I think it might be misunderstood currently (e.g. recently: discussion you moved above, this particular discussion which is a meta-discussion but fits into none of those WTs, and the QEDKbot discussion at WT:BRFA).
Speaking of WT:BRFA, the BRFA close template should probably be updated to suggest discussions be opened at WP:BOTN instead, per both the notice at WT:BRFA and WP:BOTPOL; that particular comment seems like a mistake in the template? Just saw Anomie's response above. ProcrastinatingReader (talk) 13:43, 15 September 2020 (UTC)
Citation bot
Citation bot (BRFA · contribs · actions log · block log · flag log · user rights)
Previous BRFAs:
Citation Bot is in an interesting situation. It is a widely-used and useful bot, but it has one of the longest block logs for any recently-operating bot on Wikipedia. The current listed operator is Smith609, with Kaldari and AManWithNoPlan as additional maintainers. The bot is currently blocked by RexxS for Disruptive editing still removing links after request to stop.
AManWithNoPlan opened an unblock request, which was later closed by Boing! said Zebedee, citing concerns that the bot was operating outside of its approval. There are several editors in the ensuing discussion that also feel the bot was operating outside of approval. Complicating the situation is the fact that Citation bot has had 9 BRFAs, three of which were under DOI bot. The last BRFA was in 2011, but the behavior of Citation bot has changed since then. Because of the significant confusion surrounding Citation bot's approved tasks, I am requesting that the BAG invalidate all previous approvals for Citation bot and require that a new BRFA be filed per WP:BOTAPPEAL. While this is a fairly drastic measure, I think a clear enumeration of the bot's tasks and their approval is the only way to prevent further problems. I also believe that Citation bot should continue operating and sincerely hope that it will. --AntiCompositeNumber (talk) 19:41, 27 June 2020 (UTC)
- Commenting in my personal capacity, that one editor (RexxS) with an axe to grind refuses to recognize that Wikipedia:Bots/Requests for approval/DOI bot 2 (task 1 here, generalized from DOIs to other identifiers per consensus) is both valid and reflects consensus is not grounds to vacate its previous BRFAs. As for its tasks and details, they are already detailed here. Headbomb {t · c · p · b} 20:09, 27 June 2020 (UTC)
- "after request to stop", I think less than 7 hours barely counts as "after", but that's just my opinion. I appreciate not being called an "operator": thank you. AManWithNoPlan (talk) 21:05, 27 June 2020 (UTC)
- i think that the block log needs to be carefully examined. The bot used to have no tests. now it has a huge test suite with code coverage at almost 100%. we even have tests that pull in the CS1/CS2 parameter lists and verify they have not changed. So, many of the blocks are for long gone bugs. The bot is valuable enough that people endured some horrible GIGO bugs - now headbomb complains that the bot failed on a page and I fix the page and not the bot. another block was for someone weaponizing the bot - that was crazy - so now we verify users before letting them run it. so, the only blocks that count are the "i hate what you are doing" blocks. There was a block for adding citeseerx IDs to articles (which is ironic since part of the current block is annoyance at the bot removing copyright-violating URLs). AManWithNoPlan (talk) 21:23, 27 June 2020 (UTC)
- "I appreciate not being called an "operator": thank you" - so how would you describe yourself then? A bot runner? A bot maintainer? "now we verify users before letting them run it" - who are the other members of "we"? Do you still maintain that editors who run the bot have no responsibility for the actions of the bot? Who does then? --RexxS (talk) 21:42, 27 June 2020 (UTC)
- Good question. I accidentally personified the bot. "we verify" should be "the bot verifies". WMF and I worked a lot on that task, so it is easy for me to feel part of it. I fix bugs in the bot and work hard to find bugs. So, I would say I am a software engineer and on very rare occasions a bot runner. "operator" is a highly technical term, so that is Smith and only Smith. AManWithNoPlan (talk) 00:36, 28 June 2020 (UTC)
- That's a fair answer. I must say though that I'm a little bit concerned about letting the bot decide who can run it. Is that in the list of its functions? As someone who has been writing computer programs for over 50 years now, and a former systems analyst, I'm fairly familiar with highly technical terms, but I don't think I've ever seen "operator" used in that way anywhere else. Normally I would consider that a person who assembles a list of tasks and then sets a system running to perform those tasks as the "operator" of that system, but I accept your idiosyncratic use of the word, and I'll do my best to refer to you as the bot runner in future. --RexxS (talk) 01:35, 28 June 2020 (UTC)
- @Headbomb: I don't think that trying to peddle a lie about "one editor with an axe to grind" is going to excuse your behaviour. There are numerous editors who have told you the bot is operating outside its approval by removing links from the citation title, such as SandyGeorgia, Nemo bis, Levivich, Nick Thorne. The real problem, however, is that you have unilaterally decided that it's okay to remove links from citation titles whenever some identifier has a link; whereas there is a consensus expressed at Wikipedia:Village pump (proposals)/Archive 167 #Auto-linking titles in citations of works with free-to-read DOIs by over a dozen editors in favour of having links in the citation title regardless of what other identifiers may be linked. The consensus is strongest for retaining citation title links where they point to free-to-read full text, but there is still considerable support for having a link from the title even when the full text is not available without subscription.
- You have decided that your views on those issues will take precedence and you have initiated bot runs to enforce your decision en masse, creating a fait accompli. This is against a background of editors such as MCB taking a complaint to Wikipedia:Administrators' noticeboard/Archive143 #DOI bot blocked for policy reconsideration "because it is implementing a major policy change in the way Wikipedia makes web references, without large-scale community consensus and buy-in." At that point, Fullstop stated "While I don't mind a bot adding document identifiers, this is the only function for which DOI bot was approved, and the bot is going beyond that mandate. By not restricting itself to only adding document identifiers (and doing so without barfing), it has revoked its approval."
- You have also heard SandyGeorgia and HJ Mitchell tell you that the people running the bot are not responsive to concerns expressed by ordinary editors, especially those without considerable technical background.
- You have assumed that you can simply generalise from DOIs to other identifiers at will, and claiming that the documentation at Template:Cite journal represents community consensus merely demonstrates how out-of-touch with ordinary editors you are.
- There is incontrovertible consensus that the bot should not remove links to free-to-read full text from the citation title. There must be a solid guarantee that the bot will not do that.
- There is less strong consensus that the bot should not remove links from the citation title to sources even if they are not free-to-read full text. Editors should be able to make a decision on that when editing an article without fear that a bot will overrule their decision. The bot must not substitute its judgement for that of article editors.
- At present, I believe the bot's functionality will breach both of those considerations. If the operators want to have it operate with that functionality, they should submit their request to community scrutiny and abide by the decision.
- I oppose granting authorisation to the bot for these proposed functions (User:Citation bot #Function):
- 1 - This would unlink citation titles as described above.
- 3 - Removing so-called "redundant" parameters gives far too much leeway for abuse. A lack of definition of what is intended by "redundant" parameters gives rise to the present problems.
- 8 - standardising to the dominant cite format breaches WP:CITEVAR, and allows gaming.
- --RexxS (talk) 21:36, 27 June 2020 (UTC)
- @Headbomb: I noticed that you cited Wikipedia:Bots/Requests for approval/DOI bot 2 for approval to remove free URLs. However, there Smith609 said that "the one URL manipulation deemed okay" was replacing "'url=http://dx.doi.org/#' with 'doi=#'". Is there any subsequent consensus allowing any other "URL manipulation[s]", or is that still the only one "deemed okay"? If there is a consensus to allow another "URL manipulation", where is that discussion? Best, --Mdaniels5757 (talk) 21:44, 27 June 2020 (UTC)
- "The only url manipulation" is in the context of DOI bot mangling non-identifier URL based citations like this back in May 2008. It's not a general proscription against doing other identifier based cleanup in line with template documentation. Headbomb {t · c · p · b} 21:56, 27 June 2020 (UTC)
- RexxS, there must have been some misunderstanding. I never stated that the bot operated outside its remit, only that I personally don't like the removal of pdfs.semanticscholar.org links to move them to an identifier. For that, I "blame" consensus at Help talk:Citation Style 1, which was gained to appease a single admin. For me this case is rather an example of the bot developers being too responsive to the requests of minorities of users, with the result that sometimes the overall operation of the bot appears less coherent. Nemo 06:51, 28 June 2020 (UTC)
- @Headbomb: OK. Where is the discussion showing approval to do "other identifier based cleanup in line with template documentation"? Mdaniels5757 (talk) 18:10, 1 July 2020 (UTC)
- This sort of ping should not be done, it is selectively canvassing certain users to a noticeboard. (t · c) buidhe 13:58, 4 July 2020 (UTC)
- @Buidhe: When another editor makes a false claim about me: "one editor with an axe to grind", I am perfectly entitled to refute that lie by notifying the numerous other editors who have raised complaints. This noticeboard is unlikely to be on most editors' watchlists and you should not be afraid of a broader spread of editors becoming aware of what is going on here. It is precisely because decisions are made here in a walled-garden environment that the bot's functionality has been allowed to creep beyond its remit over time. It's time to return it to doing what it has consensus and approval for. --RexxS (talk) 17:20, 4 July 2020 (UTC)
- The correct way to do it is notify all editors who participated in a discussion, not just those who have expressed viewpoints sympathetic to yours. (t · c) buidhe 17:23, 4 July 2020 (UTC)
- Nonsense. It's nothing to do with my viewpoint. It's simply a matter of demonstrating that many other editors have raised issues with this bot, and you know that. --RexxS (talk) 17:47, 4 July 2020 (UTC)
- Other editors raised issues with S2CID conversions while mostly unaware that S2CID links were often problematic to begin with. S2CID (when redundant, but nonetheless free and with no copyright issues) conversions were halted while CS1/2 templates are updated to support |s2cid=. Not url-redundancy elimination in general. Headbomb {t · c · p · b} 17:52, 4 July 2020 (UTC)
- Let the other editors speak for themselves. You're well out of order projecting your perspective onto theirs, when you've demonstrated a profound inability to comprehend the nature of complaints raised. The problems that you brought on by unilaterally extending your approval to include Semantic Scholar links were just the trigger that highlighted the bot's removal of a huge number of links from the citation title, where there is obvious consensus for them to be retained. The useful part was the identification of copyvios, but that represents only a small fraction of the total of title links removed by this bot against consensus and without approval. --RexxS (talk) 19:07, 4 July 2020 (UTC)
- You're the one abusing your position here. And the S2CID issue has been resolved for weeks now. Headbomb {t · c · p · b} 20:39, 4 July 2020 (UTC)
- The only person abusing their position here is you. You've granted yourself permission to run the bot in order to remove links from citation titles with no reference to the community's express wish that it should not do so. --RexxS (talk) 20:49, 4 July 2020 (UTC)
- I haven't granted myself shit. Headbomb {t · c · p · b} 20:53, 4 July 2020 (UTC)
- Yeah, you have. You've steadily extended the remit of this bot from a single case of replacing the url parameter with a doi one to the situation where you think you can use it to remove the title link when any of an unspecified number of identifiers exist. There's no approval for that, and there's no consensus for that. It relies purely on your implicit endorsement as a BAG member. --RexxS (talk) 21:09, 4 July 2020 (UTC)
- I haven't granted myself shit. Headbomb {t · c · p · b} 20:53, 4 July 2020 (UTC)
- The only person abusing their position here is you. You've granted yourself permission to run the bot in order to remove links from citation titles with no reference to the community's express wish that it should not do so. --RexxS (talk) 20:49, 4 July 2020 (UTC)
- You're the one abusing your position here. And the S2CID issue has been resolved for weeks now. Headbomb {t · c · p · b} 20:39, 4 July 2020 (UTC)
- Let the other editors speak for themselves. You're well out of order projecting your perspective onto theirs, when you've demonstrated a profound inability to comprehend the nature of complaints raised. The problems that you brought on by extending unilaterally your approval to include Semantic Scholar links was just the trigger that highlighted the bot's removal of a huge number of links from the citation title, where there is obvious consensus for them to be retained. The useful part was the identification of copyvios, but that represents only a small fraction of the total of title links removed by this bot against consensus and without approval. --RexxS (talk) 19:07, 4 July 2020 (UTC)
- Other editors raised issues with S2CID conversions while mostly unaware that S2CID links were often problematic to begin with. S2CID (when redundant, but nonetheless free and with no copyright issues) conversions were halted while CS1/2 templates are updated to support
- Nonsense. It's nothing to do with my viewpoint. It's simply a matter of demonstrating that many other editors have raised issues with this bot, and you know that. --RexxS (talk) 17:47, 4 July 2020 (UTC)
- The correct way to do it is notify all editors who participated in a discussion, not just those who have expressed viewpoints sympathetic to yours. (t · c) buidhe 17:23, 4 July 2020 (UTC)
- @Buidhe: When another editor makes a false claim about me: "one editor with an axe to grind", I am perfectly entitled to refute that lie by notifying the numerous other editors who have raised complaints. This notice-board is unlikely to be on most editors' watchlists and you should not be afraid of a broader spread of editors becoming aware of what is going on here. It is precisely because decisions are made here in a walled-garden environment that the bot's functionality has been allowed to creep beyond its remit over time. It's time to return it to doing what it has consensus and approval for. --RexxS (talk) 17:20, 4 July 2020 (UTC)
- "after request to stop", I think less than 7 hours barely counts as "after", but that's just my opinion. I appreciate not being called an "operator": thank you. AManWithNoPlan (talk) 21:05, 27 June 2020 (UTC)
- I don’t frequent this board, but was pinged, and AntiCompositeNumber’s proposal seems to be the way to address the problem spots I have encountered ... I hope this can be addressed without personalization, and that the bot can continue running once the wrinkles are resolved. SandyGeorgia (Talk) 00:17, 28 June 2020 (UTC)
- There seems to be broad consensus on the objective of linking open access copies from {{cite journal}} and making them as easy as possible to retrieve. However, actually doing so is hard work: here lies the value of Citation bot, which has helped countless users and edits in performing an otherwise very tedious task, even if the occasional mistake is unavoidable. I think there are two problems with the proposal to have a new BRFA now. 1) The current incident arises from a time mismatch: the citation bot was immediately adapted to new decisions on the citation templates, while the citation templates themselves are updated only at intervals of several months. The change which would fix this entire problem has been sitting at Module:Citation/CS1/sandbox for over a month despite being uncontroversial. Doing a BRFA now would be unproductive: if anything we need to wait for the templates to have stabilised and for the recent RfC to be fully implemented. 2) There is some confusion about the bot supposedly changing. My understanding (even though I was not following it in 2011) is that the bot has always been doing the same thing, but the templates have changed. Unless we want to establish a standard that a bot operating on templates needs a new BRFA every time those templates are changed by consensus, it's hard to understand what benefit there would be in a new BRFA. If the new BRFA were to decide that the bot can keep doing what it has been doing, and adapt to changing templates and consensus about their usage, we'd be back to the current situation. But if it doesn't, we'd have contradictory decisions: a consensus to do something but a decision to make it impossible to actually do it. Therefore, the best solution is generally to get a new, "better" or more precise consensus on what needs to be done about specific template parameters, in the appropriate venues. Nemo 06:51, 28 June 2020 (UTC)
- As I said on Citation bot's page, I think that Citation bot needs to be back up and running soon. Tying it up in bureaucratic morass isn't doing us any good. I suggested that an old version of the bot be restored, if that is possible, one that doesn't implement the disputed functionality. Then the bot can be actually useful, while we iron out the kinks and authorization behind the scenes. @AManWithNoPlan: is that feasible, and would you be amenable to that? CaptainEek Edits Ho Cap'n!⚓ 19:22, 30 June 2020 (UTC)
- The dispute is whether the bot should remove S2 URLs when adding S2CID parameters. I modified the bot so that these would only be removed if there is a PMC present - which makes sure that the title stays linked - or if linking to the URL violates the Wikipedia copyvio policy - the S2CID link would still exist, but the primacy of the bad link would be removed. I am surprised that part two got pushback. Part one is guaranteed to be a free link, unlike S2 URLs. It is not hard to turn off the code for removing S2 URLs. — Preceding unsigned comment added by AManWithNoPlan (talk • contribs) 19:31, 30 June 2020 (UTC)
- No, the dispute is that the bot removes links from citation titles. Period. It has no approval or consensus to do that (unless the link points to a copyright violation, which no-one is arguing about). --RexxS (talk) 23:31, 30 June 2020 (UTC)
- It does, specifically in Wikipedia:Bots/Requests for approval/DOI bot 2 since 2008. As do other bots such as Wikipedia:Bots/Requests for approval/CitationCleanerBot. This is in line with template documentation, e.g. Template:Cite journal#Identifiers. Headbomb {t · c · p · b} 16:16, 1 July 2020 (UTC)
- DOI bot 2 only covers DOI. CitationCleanerBot 1 only covers JSTOR and was 9 years ago. The template instructions are not global consensus, and they still don't say to remove the URL, in fact, the template instructions say "The |url= parameter or title link can then be used for its prime purpose of providing a convenience link to an open access copy (as in, at least accessible to everyone for free) which would not otherwise be obviously accessible". Levivich [dubious – discuss] 17:46, 1 July 2020 (UTC)
- As I also said on CitationBot's page, I agree with CaptainEek and am in favor of whatever can make citationbot usable again soonest. The changes in the unlock request sound fine to me. I think y'all are making mountains out of molehills here. (Since it was mentioned I looked at the block log and I am surprised it was so short for such a widely used bot.) If you guys want to figure out what the consensus is for the minor details of what it does that's fine, as long as you leave the bot running while you do. Iamnotabunny (talk) 15:57, 1 July 2020 (UTC)
- Sorry, I have no experience with bot approvals, but I have three questions:
- Don't we have a rule that says every bot has to have an operator? Who is the operator for Citation Bot? It seems the listed operator is not active, AMWNP is a "maintainer" not an operator, and HB is neither. Which human being is responsible for this bot?
- Once we figure out #1, that person needs to answer whether or not they are willing to modify the code to stop the bot from removing the |url= parameter from citations. It's kind of a yes or no.
- If the answer is yes, then that can be done, and then the bot can be unblocked, and conversations about the |url= parameter can continue (probably necessitating an RFC).
- If the answer is no, then the bot stays blocked, and we move right to the RFC.
- If there is no operator, that seems to be like the first priority. How do we have a bot running with no operator? Levivich [dubious – discuss] 17:32, 1 July 2020 (UTC)
- @Levivich: The human responsible for the bot is (sysop) Smith609, whose last edit was June 10. --Mdaniels5757 (talk) 18:17, 1 July 2020 (UTC)
- AManWithNoPlan is one of the two maintainers designated by Smith609 to act on their behalf. Headbomb {t · c · p · b} 21:36, 1 July 2020 (UTC)
- Above, AManWithNoPlan said
I appreciate not being called an "operator": thank you ... I fix bug in the bot and work hard to find bugs. So, I would say I am a software engineer and on very rare occasions a bot runner. "operator " is a highly technical term, so that is smith and only smith.
I don't want to speak for AMWNP or put words in his mouth, but it does not sound to me like he is acting on behalf of Smith as operator. In my opinion, an active editor needs to take responsibility for the bot's operation before the bot can be unblocked. Any bot. Someone needs to answer question #2; I'm not sure if it's fair to put it on the shoulders of a volunteer who may not want to assume that responsibility (and should not be forced to). Levivich [dubious – discuss] 21:57, 1 July 2020 (UTC)
- Mdaniels5757, yeah, but he hasn't opined on this anywhere AFAICT, even though the ANI thread was opened on June 7 and the bot was blocked on June 8. So... as of right now we have a bot with an inactive operator? I mean, three weeks isn't that long of a time, especially given what's been going on in the world, but I don't see how we unblock a bot with no active operator, at least until the operator returns or someone else steps up to be the operator even if only temporarily. Levivich [dubious – discuss] 18:28, 1 July 2020 (UTC)
I believe Smith is currently quite busy in the real world and, since you can still run the bot in tool mode, he's taking care of what really matters. As for operator, I am authorized to deal with bot-related issues on his behalf as long as it is not controversial, but I do not have password access to the account, which is why I avoid the term operator. People ask me to "shut the bot down" and I cannot do that. AManWithNoPlan (talk) 22:03, 1 July 2020 (UTC)
Replacing URLs with parameters has been an essential part of this bot's operation since as far back as I can remember. There are a lot of different URLs that should be listed as parameters. S2CID is just the latest one. But there are many more. DOI, PMID, PMC, JSTOR, CiteSeerX, HDL, arXiv, etc. — Chris Capoccia 💬 16:04, 4 July 2020 (UTC)
- So....CitationBot is just dead in the water until we get Smith609 back in action? Is there no other way we can compromise or get some sort of interim solution? CitationBot has been blocked almost a month now. CaptainEek Edits Ho Cap'n!⚓ 18:56, 4 July 2020 (UTC)
- actually, i think it's worse than that. i think we're actually waiting on CS1 group to roll out the programming with all the parameters like doi-access=free that links the title. because the people who are blocking the bot have this idea that titles must be linked. all the regular citation bot users don't care about linking the titles and are happy having a well-formatted citation with 5 or 6 parameters. — Chris Capoccia 💬 19:57, 4 July 2020 (UTC)
"all the regular citation bot users don't care about linking the titles"
- and that's the problem in a nutshell. None of the regular bot users are interested in the concerns raised and the consensus established about the linking by the majority of editors who do care about linking citation titles. Much as I'd like to see the bot operating again within its agreed parameters, I can't accept that it should be free to continue to remove title links whenever it adds another parameter. I find it astonishing that stopping it from unlinking the titles hasn't been done as an interim solution while we all wait for the folks at CS1 to make changes to the citation code that would re-establish the links automatically. --RexxS (talk) 20:27, 4 July 2020 (UTC)
"all the regular citation bot users don't care about linking the titles Nonsense. I've pushed for autolinking freely-available titles for years now. Most of us have. What we're against is pointless duplication of information and the hijacking of |url=
to give redundant links covered by identifiers like JSTOR, PMID, PMC, etc... Headbomb {t · c · p · b} 20:45, 4 July 2020 (UTC)
- I doubt Chris Capoccia will be happy with you calling his good-faith opinion "nonsense". You're going to have to accept that having a link on a citation title is not "pointless duplication of information" and the link provided on the citation title is not redundant to what is "covered by identifiers like JSTOR, PMID, PMC, etc." If the only way that can be done at present is through using
|url=
, then you need to allow editors to do that, rather than insisting that you know best and using a bot to enforce your view. --RexxS (talk) 20:58, 4 July 2020 (UTC)
- i don't care one way or the other whether titles are linked. i do care about having a properly formatted citation and i do care about having parameters. it's completely stupid all the URLs that could be a parameter or all the people putting URLs that only work from their university because they like URLs instead of actually a parameter that could work for more people. URLs will break. Intention of a parameter is to be more durable, although you have to be real that some DOIs don't actually work. whether CS1 interprets several free parameters and adds a title URL is irrelevant to me. — Chris Capoccia 💬 22:25, 4 July 2020 (UTC)
- i also don't want URLs manually added that are going to some version of the same thing as a parameter. Way too many people putting in URLs of the whole link to PMC or URLs to the full PNAS that is the same as clicking the DOI and then clicking full. — Chris Capoccia 💬 22:28, 4 July 2020 (UTC)
"i don't care one way or the other whether titles are linked."
- but a lot of other editors do care, because that's what the readers see. Having a proper link on the title is not mutually exclusive with having identifiers linked. Most readers simply expect to be able to follow a link from the title and they have no idea what the identifiers are. That's the functionality that's relevant to me. I don't want a bot deciding for me (or for any other content editor) that the title shall not be linked. That's a decision for an editor, not for a bot. --RexxS (talk) 22:47, 4 July 2020 (UTC)
- in my mind, this is the same question as whether books should use ISBNs or URLs. ISBNs go to Book Sources and are greatly preferred over URLs. If people can figure out how to click on ISBN, they can figure out how to click on other things. It's not that hard. — Chris Capoccia 💬 23:31, 4 July 2020 (UTC)
Chris, Headbomb, I read above that you don't think titles should be linked. I do. How about we have an RFC to see if consensus is with you, or with me? Levivich [dubious – discuss] 00:13, 5 July 2020 (UTC)
- if CS1 can parse parameters and create a title link, I don't care one way or the other on that. what i don't support is people manually adding URLs that are pretty much the same thing as something that should be a parameter in order to force title linking. — Chris Capoccia 💬 00:50, 5 July 2020 (UTC)
- Yes, I read it the first time :-) How about an RFC to see if consensus is with you -- about having a URL when there is already a parameter -- or not? Levivich [dubious – discuss] 01:26, 5 July 2020 (UTC)
- sure whatever. although supposedly there was already something like that that started this situation. — Chris Capoccia 💬 03:19, 5 July 2020 (UTC)
- Wikipedia:Village pump (proposals)/Archive 167#Auto-linking titles in citations of works with free-to-read DOIs? I guess that leaves deciding the venue and neutral statement? Levivich [dubious – discuss] 04:23, 5 July 2020 (UTC)
- Title should not be linked via
|url=
when those|url=
are redundant with identifiers because those URL take the place of free versions (which may or may not exist). We already had those RFCs and multiple discussions on Help talk:CS1 and elsewhere, the consensus being documented at, e.g. Template:Cite journal#Identifiers (quoting "When an URL is equivalent to the link produced by the corresponding identifier (such as a DOI), don't add it to any URL parameter but use the appropriate identifier parameter, which is more stable and may allow to specify the access status. The |url= parameter or title link can then be used for its prime purpose of providing a convenience link to an open access copy (as in, at least accessible to everyone for free) which would not otherwise be obviously accessible."). When those identifiers have free full versions (e.g. when|doi-access=free
is set), then the title should be linked if no other|url=
is provided. And when the next template update rolls around, this will happen automatically. There's no point in having|url=https://worldcat.org/0123456
instead of/along with|oclc=0123456
. Book of Stuff OCLC 0123456 makes it clear you will end up at the OCLC website when you click on the OCLC link. You'll have no idea where you'll land when you click on the first link in "Book of Stuff OCLC 0123456". Headbomb {t · c · p · b} 14:43, 5 July 2020 (UTC)
- Same response I gave to Chris above: Yes, I read it the first time. I understand your position, as I'm sure by now you understand mine. I'm also sure you'll agree with me that the best venue for an RFC is not here, in this discussion, so there's not much point to discussing the merits of the url parameter any further here.
- Have you any thoughts as to what the best venue is, and what the neutral RFC statement should be? Levivich [dubious – discuss] 18:55, 5 July 2020 (UTC)
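For readers unfamiliar with the mechanics being debated, here is a hedged wikitext sketch of the two citation styles at issue (the author, title, and DOI below are invented for illustration): the first form duplicates the DOI resolver link in |url=, while the second drops the redundant URL and marks the DOI free-to-read so CS1 can link the title from the identifier.

```wikitext
<!-- Redundant form: |url= points at the same resolver page as the DOI -->
{{cite journal |last=Doe |first=J. |title=Example study |journal=Example Journal
 |year=2020 |doi=10.1000/xyz123 |url=https://doi.org/10.1000/xyz123}}

<!-- Identifier-only form, per Template:Cite journal#Identifiers: |doi-access=free
     signals a free full text, leaving |url= available for a distinct free copy -->
{{cite journal |last=Doe |first=J. |title=Example study |journal=Example Journal
 |year=2020 |doi=10.1000/xyz123 |doi-access=free}}
```

Whether the second form should also auto-link the title is precisely what the proposed RfC and the pending CS1 template update would settle.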
- Levivich, I'm thinking WP:VPT, or maaaybe WP:VPR. CaptainEek Edits Ho Cap'n!⚓ 19:16, 5 July 2020 (UTC)
It could also just be part of a BRFA. CaptainEek Edits Ho Cap'n!⚓ 19:28, 5 July 2020 (UTC)
- @Levivich: the recent RfC on auto-linking was at Wikipedia:Village pump (proposals) and that was well-attended, but VPP may be better if the question is phrased as a policy/guideline. I'm interested in two issues concerning linking of citation titles: (1) linking of titles to a full, free text; and (2) linking of titles to the best online source available. I appreciate that having two questions would risk confusion, so perhaps addressing the first point is necessary before considering the second. --RexxS (talk) 19:37, 5 July 2020 (UTC)
- @RexxS, Levivich, Headbomb, and AManWithNoPlan:, How does this sound for a neutral RfC statement: "The recent block of CitationBot has raised concerns over how we link to citations. How should the titles of citations link to sources? How would that change if a distinct identifier (such as a DOI, PMC, PMID, etc.) is also provided in the source?"
- I'll be honest, this issue is pretty complicated and I don't think I entirely get it, so feedback is welcome. But someone had to do something, this has just sat for 2 weeks with nothing happening. CaptainEek Edits Ho Cap'n!⚓ 19:51, 20 July 2020 (UTC)
- @CaptainEek: I'd prefer to keep an RfC to a simple question like "Should the title in a citation be linked to the best online source whenever available?". You could use a Background section to provide secondary information, such as the bot's activities and our prohibition on linking to copyright violations. If that has consensus, as I maintain it has, then the consequences should be easy to work out. --RexxS (talk) 20:04, 20 July 2020 (UTC)
- Sorry for being out of the loop but I have my day job, covid-19 keeping us insane, teaching a college class temporarily, and working real hard at adding lots of testing to the Citation Bot. The Bot is now much stronger. AManWithNoPlan (talk) 21:27, 20 July 2020 (UTC)
- @CaptainEek: Rexxs's wording is awful, because it implies links to non-free identifiers are "the best online source available". What the title should link to is free versions of record when available. That's the consensus, and it will not change following an RFC. What it shouldn't link to are paywalled/database links (e.g. PMID 123465) redundant to identifiers which usurp free links to versions of records. But if you want to create another RFC on the issue, your wording is fine. Headbomb {t · c · p · b} 22:18, 20 July 2020 (UTC)
- Yet another attack from Headbomb. The question I suggested speaks to a fundamental principle that he is frightened of. There is already a consensus that free versions should be linked from the citation title – although Citation bot failed to respect that, and should not be editing until it does. If no free version of a source exists, I believe that readers still want the best available online source to be linked from the citation title. An RfC will confirm that and that is why Headbomb doesn't want an RfC on the issue. Headbomb is quite wrong to think that he can ignore a consensus established by an RfC and he will do so at his peril. --RexxS (talk) 18:54, 21 July 2020 (UTC)
- CaptainEek, thanks for moving the ball forward. Seems to me there are three basic issues:
- should the titles in citations be linked, and if so, to what (in other words, what should be in
|url=
) - should the titles in citations be linked if that link is a duplicate of an identifier (DOI, PMC, PMID, etc.) (in other words, should we have
|url=
even when it is duplicative of, e.g.,|doi=
?) - under what circumstances should a title link be removed from a citation, by human or by bot (in other words, when should
|url=
be blanked) Levivich [dubious – discuss] 22:46, 20 July 2020 (UTC)
- should the titles in citations be linked, and if so, to what (in other words, what should be in
It should also connect this to Template:Cite journal#identifiers rather than User:Citation bot. Headbomb {t · c · p · b} 23:09, 20 July 2020 (UTC)
- Block this bot until somebody fixes it. The bot is removing URLs from titles if the cited article has a DOI. This is not good. Many users don't bother to click on the DOI, as they don't know what it is.
- Some URLs in titles link to articles that users can access for free, while the DOI's link is not free. Therefore, the bot is removing other editors' work and is preventing users from freely accessing information. If nobody corrects this quickly, I suggest that editors remove DOIs from citations that have direct links to titles. That will provide a work-around that will stop this bot. Corker1 (talk) 20:35, 29 July 2020 (UTC)
- That won't do anything at all; Citation bot will still convert parameter URLs to use their dedicated parameters. If the DOI is freely accessible, then add
|doi-access=free
. Headbomb {t · c · p · b} 09:47, 30 July 2020 (UTC)- @Headbomb, RexxS, AManWithNoPlan, and Chris Capoccia: I have created an RfC at VPR with Levivich's questions: [2] CaptainEek Edits Ho Cap'n!⚓ 00:32, 5 August 2020 (UTC)
- @Corker1: The bot has only ever removed urls that go to exactly the same place as the doi. So the entire premise of your comment above is incorrect. Free urls added to supplement non-free dois are not being removed by the bot and have not been removed by it. —David Eppstein (talk) 00:06, 31 August 2020 (UTC)
- @David Eppstein: If the result of the bot altering a citation is that the title link is removed, then it's malfunctioning. There is now a clear consensus that editors believe that the title should be linked when the source is available online. There never was any doubt that when a free source is available online, then the title should be linked. If the bot needs to add
|doi-access=free
in order to maintain the link, then it must do so. If it's incapable of doing that, then it should not be removing anything from the citation. --RexxS (talk) 19:02, 31 August 2020 (UTC)
In this edit a title link was removed, one which did not duplicate a DOI or other such identifier, but it was a link to a WorldCat entry, which I don't think is proper for a title link in any case, as it does not go to any version of the source text. DES (talk)DESiegel Contribs 22:34, 31 August 2020 (UTC)
Please see: Wikipedia:Administrators' noticeboard#Citation bot again Lev!vich 16:37, 18 September 2020 (UTC)
User:AnomieBOT dropping information from substituted templates
This is not a formal request to have User:AnomieBOT blocked or anything, just a heads-up that there is a problem with it, it has been reported to the author, and they have responded in a way that makes me think they're unwilling to fix the problem.
See User talk:AnomieBOT/Archive 12#Losing_information for the details, but the summary is that the bot's template substitution drops encyclopedic information (such as a reference author's name) from the articles' wikitext, because the substitution is performed even when Module:Check for unknown parameters would throw an error. My suggestion is that the templates be fixed to include this check; that the bot be fixed not to substitute in case of an error (I've reviewed the bot's source code to check this is feasible and straightforward; it is); and that it not substitute templates which do not check for unknown parameters.
The important point here is that "simply" fixing the templates does not undo past damage, or prevent future damage when templates change again. Someone will have to use whatever logs there are of AnomieBOT's activities to go over past changes and correct them, but there's little point in doing so while the underlying issue persists.
(Feel free to discuss this in the appropriate place or ignore it as you see fit; but I don't see how responses along the lines of "the bot is just following human instructions" are relevant to the issue. Of course it is: that code should be changed).
Eelworm (talk) 07:01, 14 October 2020 (UTC)
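For context, the check Eelworm refers to is usually wired into a template roughly like this (a hedged sketch: the template name, whitelist, and tracking category are invented for illustration; see Module:Check for unknown parameters for the module's actual interface):

```wikitext
<!-- At the end of a hypothetical Template:Example, after its normal output -->
{{#invoke:Check for unknown parameters|check
 |unknown=[[Category:Pages using Template:Example with unknown parameters]]
 |preview=Page using [[Template:Example]] with unknown parameter "_VALUE_"
 |author |title |url |access-date}}
```

Any parameter passed to the template but not in the positional whitelist (|author, |title, ...) gets flagged via the category and preview warning; Eelworm's proposal is that the bot decline to substitute whenever that flag would fire, rather than silently discarding the unrecognized parameter.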
- As you've already been told, the issue is with the template, not the bot. — JJMC89 (T·C) 07:07, 14 October 2020 (UTC)
- And as I've already responded, I disagree with that opinion. The template does need modifying, but the data shouldn't be lost for the time period up to the point at which the template is modified. Eelworm (talk) 10:43, 14 October 2020 (UTC)
- This seems to me to be a case of WP:MOLEHILL. People use incorrect parameters to templates all the time, for various reasons. AnomieBOT's substing preserves the visual result of the template being substed. If the unused parameter actually is "encyclopedic information" as claimed, that should be determined at the template's talk page and the people there should figure out some way for the information to be included in the template's output. Anomie⚔ 11:13, 14 October 2020 (UTC)
- If your point is that AnomieBOT doesn't perform very many substitutions in the article namespace, that's correct; however, of those it does perform, a large proportion is incorrect.
- Again, the problem is that any fixes to the template would not be retroactive. As it is, I estimate there are hundreds of edits which remove the author of a reference or its access date from the article's wikitext. I claim that's clearly encyclopedic information, and if it doesn't constitute a mountain I wonder what does.
- You say that people use incorrect parameters to templates "for various reasons". I don't think "so the template gets substituted and the incorrect parameter removed" is among those.
- Again, this is easy to fix for the future, once and for all, and then we can go back and clean up the "molehill", by going through thousands of edits and identifying which ones need to be reverted. As it's a mere molehill, I assume you're volunteering for that part. (More seriously, I don't think the reference to WP:MOLEHILL is at all appropriate when talking about hundreds or thousands of articles.)
- As for the claim that the substitution "preserves the visual result of the template being substed", that is correct only as long as the template isn't modified afterwards. By the very nature of substitutions, later fixes to the template will no longer improve the visual result of the template invocation, and the extra information that was previously present (but wasn't being displayed) is lost in the chaos of the article history. Eelworm (talk) 11:52, 14 October 2020 (UTC)
- Eelworm, there are certain templates that are required to be substituted, and AnomieBOT just enforces those. People putting in wrong information that the template won't render anyways, is not Anomie's problem. He is running an approved task and it is doing exactly what it is designed to do. Substituting templates requiring substitution. —CYBERPOWER (Chat) 14:01, 14 October 2020 (UTC)
- (edit conflict) You make a lot of unsupported claims there. Let's look at some real numbers. The "abruf" parameter was added to de:Vorlage:Internetquelle at 2019-03-22T01:42Z. Since that time, AnomieBOT has made 697,084 edits. 74,261 of them were TemplateSubster edits. 1,126 substituted {{Internetquelle}}. 516 involved |abruf=, affecting 461 pages (350 articles, 38 draft pages, 68 user pages, and 5 talk pages). Your claim that "a large proportion [of AnomieBOT's edits] is incorrect" seems incorrect. I see no indication that de:Vorlage:Internetquelle contains an "author" parameter that is not supported by Template:Internetquelle, as you implied in your post. As for the access dates, I'm unsure how useful they really would have been since the people adding them apparently didn't check whether they were being rendered. But you're welcome to go through the list and add access dates to any you think need them. And yes, AnomieBOT preserves the visual result of the template being substed at the time of substitution. That's exactly what everyone else involved expects and intends. If they wanted any arbitrary undefined parameters to somehow be preserved in the wikitext too, they'd do that when setting the template up for substing. Anomie⚔ 14:15, 14 October 2020 (UTC)
- Perhaps, templates with extra parameters on pages that have been edited recently AND are drafts/sandboxes should be skipped. AManWithNoPlan (talk) 14:47, 14 October 2020 (UTC)
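Numbers like these can be reproduced by pulling the bot's contributions (e.g. via the Action API's list=usercontribs) and filtering the edit summaries locally; the helper below is a hypothetical sketch of that filtering step, not AnomieBOT's actual tooling:

```python
def count_subst_edits(contribs, template="Internetquelle"):
    """Count contributions whose edit summary mentions substituting `template`.

    `contribs` is a list of dicts shaped like the items returned by the
    MediaWiki Action API with list=usercontribs (each has a 'comment' field
    holding the edit summary).
    """
    matches = 0
    for c in contribs:
        comment = c.get("comment", "").lower()
        if "subst" in comment and template.lower() in comment:
            matches += 1
    return matches
```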
- Eelworm has been told at a previous discussion the way to stop this potentially invalid substing: "If there is a problem with substing of a particular template, stop it from being substed until the problem is fixed." I'm pretty sure that the documentation for {{Internetquelle}} is not protected. If Eelworm does not feel comfortable editing that documentation, they can place an edit request on the template's talk page. – Jonesey95 (talk) 15:39, 14 October 2020 (UTC)
- Thanks for all the responses; I admit that the estimate I gave is just a rough estimate, and it seems better numbers are available (but not easily, to me). I remain convinced that the problem is precisely as I described it and that damage is being done to Wikipedia by treating it as a bot playground. What I learned was that this seems to be an individual opinion opposed by a broad consensus, and it's thus better to discuss this on my user talk page or elsewhere (please ping me if you still think a response from me would be useful). Eelworm (talk) 06:31, 15 October 2020 (UTC)
@Anomie: Doing a quick check of some edits, this clearly changed the visual output (before: authors are listed; after: authors aren't listed). Surely that isn't the bot's intended function. Headbomb {t · c · p · b} 07:03, 15 October 2020 (UTC)
- Headbomb, issue is the template I think. |auteur1= is only supported in Template:Lien web in the live format, it doesn't exist in the subst format, so it's omitted on subst. Template error, rather than bot error, I think? ProcrastinatingReader (talk) 12:52, 15 October 2020 (UTC)
- Exactly. Why is there even a separate "live mode" and "subst mode" for that template? Anomie⚔ 14:00, 15 October 2020 (UTC)
- From my experience, substed templates with a lot of parameters generally have that happen so that there aren't dozens of blank parameters being placed on an article when it is subst, but (again, in general, haven't looked at this template) often that template coding will mess up the wrapper and/or code that is meant to be displayed until such time as the template is subst (i.e. it might not pass a parameter correctly, or end up with extraneous pipes or #if statements, etc). Primefac (talk) 16:13, 16 October 2020 (UTC)
Proposed change to template
Before I go and do it, I'd like to seek approval to change {{BotTrial}}. Currently, it says to send it to trial, and either the # of edits or days to run it. I would like to add the sentence Please provide a link to the relevant contributions and/or diffs when the trial is complete.
I want to add this because I just dealt with a bunch of BRFAs where I couldn't figure out which contribs were relevant to the trial because the bot had run other tasks in the meantime.
I know I could just state the above every time I approve a task for trial, but at that point isn't that what templates are for? Primefac (talk) 16:53, 16 October 2020 (UTC)
- Seems fine to me. — xaosflux Talk 16:55, 16 October 2020 (UTC)
- Primefac, I’m going to need to see template change proposition form 435 in triplicate before I can consider this change. —CYBERPOWER (Trick or Treat) 02:03, 17 October 2020 (UTC)
- Great idea. Enterprisey (talk!) 02:10, 17 October 2020 (UTC)
- No, it must go through quintriplicate approval! One day it's a minor improvement to a template used on a technical workflow, the next day, it's deleting the main page. Headbomb {t · c · p · b} 02:13, 17 October 2020 (UTC)
- I.e. go ahead. Headbomb {t · c · p · b} 02:31, 17 October 2020 (UTC)
- Thanks all, done. I put the same message at {{BotExtendedTrial}} as well for completeness. Primefac (talk) 10:29, 17 October 2020 (UTC)
Proposal: Cosmetic Bot Day
Wikipedia:Village_pump_(proposals)#Cosmetic_Bot_Day_(CBD) -- GreenC 15:44, 25 October 2020 (UTC)
Double redirects on TE-protected templates
Previous discussion: Template_talk:Infobox_Swiss_town#Edit_request
I've seen this a few times now. A template is moved/redirected, and has redirects of its own, and double redirects are created. Sometimes these are easy to spot and fix by hand, but (as in this case) it can sometimes turn into a real tangled mess. We rely too much on double redirect bots generally, and forgetting to update these manually inadvertently causes large-scale wiki-wide disruption. I requested at the previous discussion if we could discuss giving either Xqbot or EmausBot template-editor privileges so they can fix double redirects on TE-protected templates. (Operators: @Xqt and Emaus:)
EmausBot seems to edit template-space: contribs
Xqbot as well: contribs
I'm guessing both are likely failing on TE-protected templates with an edit error, rather than explicitly skipping TE-protected pages via a check, so I suspect just granting the bots the rights will make them fix the double redirects, but I haven't checked the source for either bot yet to confirm. As far as trust goes (a concern about granting perms was raised there), both are long-time bot operators with many edits (locally & globally), and certainly both are qualified developers and operators. Xqt also seems to be a dewiki crat and Pywikibot dev. So I don't think that's much of a problem. Wanted to gather BAG thoughts on this? ProcrastinatingReader (talk) 01:28, 31 October 2020 (UTC)
- Without commenting on the merits of this proposal, I've fixed the only double redirect in template space that was listed at Special:DoubleRedirects, and it was indeed template protected. --DannyS712 (talk) 03:21, 31 October 2020 (UTC)
- I see no issue with this, but will wait a day or two (since it is the weekend) for others to see this and comment. Primefac (talk) 22:36, 1 November 2020 (UTC)
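For reference, the core of what a double-redirect bot does is mechanical: follow each redirect chain to its end and retarget any redirect that doesn't already point there. A minimal sketch (data structures and names are illustrative, not Xqbot's or EmausBot's actual code):

```python
def resolve(redirects, start, limit=10):
    """Follow a chain of redirects to its final target.

    `redirects` maps page -> target for pages that are redirects;
    a double redirect is any page whose target is itself a key here.
    Raises ValueError on a redirect loop or an overly long chain.
    """
    seen = {start}
    page = start
    while page in redirects:
        page = redirects[page]
        if page in seen or len(seen) > limit:
            raise ValueError("redirect loop or chain too long")
        seen.add(page)
    return page


def fix_double_redirects(redirects):
    """Return {page: final_target} for every redirect needing retargeting."""
    return {p: resolve(redirects, p) for p in redirects
            if redirects[p] in redirects}
```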
BAG nomination
Hi! This is a procedural notification that I've requested to join the Bot Approvals Group. Your comments would be appreciated at the nomination page. Thanks, ProcrastinatingReader (talk) 01:56, 17 November 2020 (UTC)
- Aha, saw that coming. Wait, I'll be there to give you company! – SD0001 (talk) 16:05, 17 November 2020 (UTC)
Another BAG nomination
I have started another BAG nomination as above. Comments appreciated at the nomination page. – SD0001 (talk) 17:09, 17 November 2020 (UTC)
Bot to track the activity of fully automatic bots
Please see the BRFA Wikipedia:Bots/Requests for approval/SDZeroBot 9. Thanks, – SD0001 (talk) 12:05, 14 November 2020 (UTC)
- A bot that tracks all and only those which do not track themselves? — Wug·a·po·des 03:19, 15 November 2020 (UTC)
- I was thinking who watches the watchers. --Izno (talk) 14:37, 15 November 2020 (UTC)
- @Izno: That link doesn't point where you're expecting it to. * Pppery * it has begun... 14:42, 15 November 2020 (UTC)
- Unless it was a subtle nod to the fact that botops are indeed gods. Primefac (talk) 15:03, 15 November 2020 (UTC)
- ... But why. Who thought that was a good idea. --Izno (talk) 15:22, 15 November 2020 (UTC)
- Oh. This is awkward. --Izno (talk) 15:28, 15 November 2020 (UTC)
- I'm so confused, but I like the sound of being a god. ProcrastinatingReader (talk) 15:31, 15 November 2020 (UTC)
- Oh, and here I thought pppery thought that Izno meant to point to Watchmen. Primefac (talk) 15:41, 15 November 2020 (UTC)
- No, but that's one of the more-obvious pop culture references to the Latinism. --Izno (talk) 15:48, 15 November 2020 (UTC)
- The pedant in me is obliged to point out that Watchmen is a reference to JFK's "we are the watchmen on the walls of world freedom", not to the Latinism. ‑ Iridescent 15:59, 15 November 2020 (UTC)
- True, but the tagline for the film (and the graffiti) is "who watches the Watchmen". Primefac (talk) 16:07, 15 November 2020 (UTC)
- I asked "who thought it was a good idea to target the Star Trek article". Guess who. (And when.) --Izno (talk) 15:48, 15 November 2020 (UTC)
- I am a noob at this game, really. Heard about the bot that controls bots? – SD0001 (talk) 11:14, 16 November 2020 (UTC)
Determining the specificity required for userspace-editing bots
Hello! The RedWarn team and I have recently considered placing the RedWarn script (currently located at User:Ed6767/redwarn.js) into its own user account: User:RedWarn (a blocked account, to which we are currently requesting access), in order to create a continuous integration system (since all patches, including emergency ones, currently need to pass through Ed6767 first, which can be a problem if he is unavailable). Before proceeding, however, we went over the bot policy to ensure that we're doing things right, and have stumbled upon some tricky wording that needs clarification. According to the bot policy on bot accounts:
The account's name should identify the bot function (e.g. <Task>Bot), or the operator's main account (e.g. <Username>Bot). In all cases, it should be immediately clear that the edits are made by an automated account, which is usually achieved by including Bot at the end of the account name.
Noticing this, we're now unsure if User:RedWarn would be able to edit under its current username due to a wording issue, with "usually" and "should" being used instead of strong "must"s (which we could have interpreted differently). To avoid possibly wikilawyering the wording in the future, I've come to ask this board instead: Can we simply disclose the nature of the edit (as a bot edit) in its edit summary, instead of through the username, as the policy only suggests the account naming convention, and does not necessarily require it? Hopefully this can clear out the confusion, and also set a precedent for other bot accounts which only edit within their own userspace. Many thanks! --Chlod (say hi!) 10:26, 23 November 2020 (UTC)
Note: This has been discussed in some form in Wikipedia:Bots/Noticeboard/Archive 4#Bot_name, however the bot process in 2009 has changed a lot from today's process, and the scope of the bot is different from ours.
- Chlod, is this going to be a bot, or is this going to be a shared account where multiple users can log in to perform maintenance on a script that is stored in said account's userspace? If it's the latter, then this isn't a bot issue. Primefac (talk) 10:31, 23 November 2020 (UTC) (please do not ping on reply)
- Not a shared account. Updates to the script will all be done automatically. This is definitely a bot, and I'm pretty sure I checked the noticeboard before making the thread. Chlod (say hi!) 10:33, 23 November 2020 (UTC)
- Fair enough, that's why I asked. If the account is clearly marked for the functions you describe, and it will only be editing its own userspace, then I see no issue. Primefac (talk) 10:39, 23 November 2020 (UTC)
- So, does "
clearly marked
" still mean we need to include theBot
in the name or are we in the clear to just have use the edit summary for demonstrating the nature of the edit (as a bot edit)? Chlod (say hi!) 10:43, 23 November 2020 (UTC)- I genuinely cannot see anyone complaining about a user called "RedWarn" editing its subpages, updating WP:Redwarn's scripts, with a clearly-indicated userpage, as being a problem. Primefac (talk) 10:47, 23 November 2020 (UTC)
- Alright then, thanks for the guidance! Chlod (say hi!) 10:50, 23 November 2020 (UTC)
- Although this probably isn't a bot needing approval, you may want to run this by WP:VPT. I'm not sure we have any other script that is updated automatically by an external CD and multiple users can push updates to it. Since this isn't a gadget but rather a userspace edit that's perhaps less of a concern, but I'd think that even then there needs to be some accountability of which user pushed an update (perhaps in edit summary), if the actual editor isn't the reviewer. Since, otherwise, only a user themselves and IAs can edit javascripts. Folks at VPT would be able to better advise. ProcrastinatingReader (talk) 11:38, 23 November 2020 (UTC)
- Alright then, thanks for the guidance! Chlod (say hi!) 10:50, 23 November 2020 (UTC)
- RFC 2119 defines "should" as "there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course." ―sportzpikachu my talkcontribs 10:33, 23 November 2020 (UTC)
- Not sure how that's relevant, or why you wouldn't just use a dictionary. (a small attempt at humour) Primefac (talk) 10:41, 23 November 2020 (UTC)
- RFC 2119 is usually the standard for clear definitions of "must/must not/should/recommended/should not/may" etc in tech. The definitions are so clear I've gotten accustomed to expecting those used everywhere. Example of the clarity as a result. Buttt it's probably a mistake to think that BOTPOL is strictly following that convention. ProcrastinatingReader (talk) 14:52, 23 November 2020 (UTC)
- Not sure how that's relevant, or why you wouldn't just use a dictionary. (a small attempt at humour) Primefac (talk) 10:41, 23 November 2020 (UTC)
I am feeling a little puzzled about the rationale here for this account. Right now it reads as if you intend to bypass Ed in some cases, and that's why you want the account. Am I misreading? If so, that would seem to be a concern from the interface admin perspective...
(As a note, while I do not know what goes on behind the scenes, Twinkle may be using the same or similar model.) --Izno (talk) 15:53, 23 November 2020 (UTC)
- There's a parallel discussion on my talk page that might answer your questions, I'll transclude it below to save a click.
From User talk:Primefac
Hey PrimeFac, I noticed your block on User:RedWarn. The RedWarn team and I would like to use the account for abuse report management and for continuous integration. I've also asked CrazyBoy826, but my fellow developers inform me that they have attempted to contact him before to no avail. If CrazyBoy826 does not reply, do you mind resetting the account and sending a temporary password to
- I think that answers your question about who "owns" the account. Primefac (talk) 15:58, 23 November 2020 (UTC)
- Does indeed. --Izno (talk) 16:08, 23 November 2020 (UTC)
- So long as the only thing an account does is low-volume editing of its own userscript, it doesn't run afoul of the bot policy - you don't have to call the account "bot" either. Normally non-"bot" accounts should never have shared access, so it would be an alt-account for someone and that should be declared as a legit-sock. If this script is getting very popular, it should be considered for moving to a community-managed process in MediaWiki space and/or turned into a proper gadget. — xaosflux Talk 15:56, 23 November 2020 (UTC)
- We edit-conflicted, but see my comment above in reply to Izno. Primefac (talk) 15:58, 23 November 2020 (UTC)
- What would define a very popular script? Given RedWarn is in use by multiple admins, and in all over 300 editors (roughly), if a ballsup happens, that could have a drastic effect here. Should I open an RfC on Wikipedia:Village pump (technical) about making RedWarn a gadget and moving it into MediaWiki space? That might delay updates, but it has the benefit of making RedWarn a gadget, plus extra review and scrutiny from intadmins before the updates go live. If we do that, we'd need to scrap these plans regarding a CI bot, but, if allowed (but absolutely not by default), we can add an option (buried in RedWarn's preferences) to allow users to use the latest unreviewed version if they so choose and understand the risks of doing so. Ed talk! 17:44, 23 November 2020 (UTC)
- My opinion is that having the bot's username end in "bot" is preferred for easy identification of the account as a bot account, but not required. There are other ways to ensure that the account is easy to identify as a bot account. For instance, you could include hints in all edit summaries like "(bot edit)" or "(automated edit)". You could ensure that the account's user and user talk pages clearly indicate that it's a bot. If the bot is editing user talk pages, the messages you post there could include a notification that it's being generated by a bot. The point is to make it clear that this is a bot; I don't think anyone particularly cares how you achieve that, as long as you achieve it. ‑Scottywong| [speak] || 16:22, 23 November 2020 (UTC)
- Or the OP could just grab User:RedWarnBot and not have to worry about any of this. – Jonesey95 (talk) 17:25, 23 November 2020 (UTC)
- The OP in this thread is somewhat contradicting Ed, who has indicated (see above) that he will control the account and it will not be a "bot" account. Primefac (talk) 17:28, 23 November 2020 (UTC)
- There's been a lot of confusion about this but when it comes to matters of RedWarn, Ed's word takes precedence over mine, so we're going with his plan instead. Pretty much makes this thread finished. Chlod (say hi!) 17:31, 23 November 2020 (UTC)
- The OP in this thread is somewhat contradicting Ed, who has indicated (see above) that he will control the account and it will not be a "bot" account. Primefac (talk) 17:28, 23 November 2020 (UTC)
- Or the OP could just grab User:RedWarnBot and not have to worry about any of this. – Jonesey95 (talk) 17:25, 23 November 2020 (UTC)
Meta discussion: Refine global bot policy
Notification that some folks here may be interested in an ongoing discussion at meta on expanding the m:Bot policy to broaden its scope. Discussion is at meta:Requests for comment/Refine global bot policy. ProcrastinatingReader (talk) 22:54, 28 November 2020 (UTC)
- FWIW I think it's a good idea, but I also think it'd be appropriate to have new requests notified to this venue. There's a couple of thorns I see that I hope are clarified, and of course there's always the option of amending WP:GLOBALBOTS to opt-out, but I think if the thorns can be addressed to ensure these will only be genuinely uncontroversial tasks then it's not a good idea to knee-jerk opt-out. Though, given the requirement for already having a local flag on multiple wikis, I think it's likely said bot will have already been through enwiki's BRFA? ProcrastinatingReader (talk) 23:03, 28 November 2020 (UTC)
- I think you have it backwards. We'd have to amend WP:GLOBALBOTS to opt-in if we wanted to. Anomie⚔ 13:34, 29 November 2020 (UTC)
- I do indeed have it backwards. It's slightly confusing since the global bot policy that was approved here was, at the time, also only interwiki links (& double redirects). But it seems that discussion did explicitly say it would only be done for interwikis, rather than opting in for anything that would come in the future. ProcrastinatingReader (talk) 13:52, 29 November 2020 (UTC)
Overlapping bots?
Just a suggestion but would it be possible to just use one bot for citation template edits? My watchlist is lit up like a Christmas tree with User:Monkbot, User:Citation bot and User:WikiCleanerBot doing much the same thing. I have created a filter to hide bot edits but using this would miss the other edits that bots make. Or perhaps it's a problem with citation templates changing daily, don't use them myself? Nimbus (Cumulus nimbus floats by) 17:20, 2 December 2020 (UTC)
- They're all similar but different tasks. Monkbot is repairing deprecated or invalid parameters. Citation bot is fixing the references themselves, and WikiCleanerBot is fixing general errors (some of which are inside the cite templates). Primefac (talk) 17:56, 2 December 2020 (UTC)
- Likewise User:JCW-CleanerBot is focusing on doing |journal=-related cleanup. Different bots for different tasks, sometimes they overlap, sometimes they don't. Headbomb {t · c · p · b} 18:40, 2 December 2020 (UTC)
- Well, it's a bit irritating, can you folks not get together and work out the best way to minimise watchlist clutter? I could hide bots all the time but then I would miss the mistakes and other strange things that bots sometimes do. Nimbus (Cumulus nimbus floats by) 18:47, 2 December 2020 (UTC)
- Wikipedia is a volunteer thing, and different bots work in different, often incompatible, ways. If you really hate specific bots, you can check WP:HIDEBOTS and hide them from your watchlists without removing bots in general. Headbomb {t · c · p · b} 18:50, 2 December 2020 (UTC)
- I feel ya, Nimbus. I'm not particularly thrilled with Citation bot right now, as it's cluttering up my own watchlist on occasion, but on the whole I've just learned to mark those as read without really looking at the diff itself. Primefac (talk) 19:48, 2 December 2020 (UTC)
- I didn't mention the humans using AWB as well! As long as some disturbance is noted which I think it has been. Nimbus (Cumulus nimbus floats by) 20:09, 2 December 2020 (UTC)
- I feel ya, Nimbus. I'm not particularly thrilled with Citation bot right now, as it's cluttering up my own watchlist on occasion, but on the whole I've just learned to mark those as read without really looking at the diff itself. Primefac (talk) 19:48, 2 December 2020 (UTC)
Cosmetic Bot Day (CBD)
The CBD proposal has closed as support, but with conditions so complex it might as well be oppose, due to the excessive work placed on bot ops. No one will volunteer to make a bot like that; it's far too difficult, both technically and procedurally. There have been requests to leave it up to BAG to manage. My sense is that unless the community can trust BAG to manage this, then it should be closed as oppose. There should not be a separate bot process outside the BAG system. -- GreenC 05:30, 2 December 2020 (UTC)
- I think comments along the line of Enterprisey and Majavah were quite interesting if it could be pulled off, but perhaps a long term goal. That being said, those conditions do just say "I propose" so I don't really think the closer intended them to be binding, especially as no participants raised those criteria or concerns. Just my 2c. ProcrastinatingReader (talk) 05:41, 2 December 2020 (UTC)
- See also Wikipedia:Village pump (proposals)#Closure, since S Marshall doesn't seem inclined to undo the closure. Headbomb {t · c · p · b} 13:13, 2 December 2020 (UTC)
- That close, even AGF, smells like a supervote by someone opposed to the idea, which can even be seen from the first sentence: "I'm somewhat surprised to find that rough consensus exists for a trial". The conditions placed are absolutely not supported by the majority of editors who were in support of the proposal, and those with the knowledge also stated that some of them are either not possible or require a huge amount of work which no one has volunteered to do. The closer should not have attempted to govern what was not in his mandate. --Gonnym (talk) 13:26, 2 December 2020 (UTC)
Testing a new bot
I'm working on a new bot to add short descriptions to some of the pages that currently lack them, and will be applying for bot approval for ShortDescBot in the next week or two. I want to make sure I do things properly, and there's one thing I find a bit unclear in WP:BOTPOL. Is it OK, before seeking formal approval, to make a small number of live assisted edits, in a similar way to AWB? (I am approved to use that). I would manually check each and every edit before it's made, and of course immediately correct any error. If so, can I do that with my normal user account or with the bot account? MichaelMaggs (talk) 10:33, 3 December 2020 (UTC)
- That's okay. You should do these with your normal account. ProcrastinatingReader (talk) 10:35, 3 December 2020 (UTC)
- Many thanks for the quick response. Much appreciated. MichaelMaggs (talk) 10:36, 3 December 2020 (UTC)
Another quick question if I may. I understand that Pywikibot's editing speed has to be restricted as defined by the bot approval, but what about reading the API? I see that sometimes the server adds a delay of its own accord, so am I normally OK to perform read-only runs leaving the user-config defaults at minthrottle=0 and maxthrottle=60? MichaelMaggs (talk) 18:05, 7 December 2020 (UTC)
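For context, those throttle settings live in Pywikibot's user-config.py; the fragment below just restates the defaults mentioned in the question (the comments are my gloss, not Pywikibot's documentation):

```python
# user-config.py (Pywikibot) -- read-only run defaults discussed above
minthrottle = 0    # minimum seconds to wait between requests
maxthrottle = 60   # upper bound the throttle can back off to under server load
```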
- Reading data is outside of the scope of BOTPOL. The only restrictions are MediaWiki's rate limits, enforced by the software, which I don't think apply to reading data. afaik the only guidance is: mw:API:Etiquette#Request_limit, summarised with "be considerate and try not to take a site down". ProcrastinatingReader (talk) 18:17, 7 December 2020 (UTC)
- While reading the API, as long as you're sending the requests serially (one after the other, rather than several at a time), there's no need to use any rate-limiting or throttling. While against the etiquette, as long as you're doing it for a short period only, it's also OK to send requests in parallel with a reasonable concurrency. I have used this for testing bots when I don't want to wait a lot of time to check the results. After the testing stage, I set the concurrency back to 1. – SD0001 (talk) 18:41, 7 December 2020 (UTC)
- That's very helpful, thanks very much. I'm not going to be doing anything in parallel, so the server should be able to cope. MichaelMaggs (talk) 19:17, 7 December 2020 (UTC)
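The advice in this thread (read serially, be considerate, don't parallelise by default) boils down to a loop like the following; `fetch_page` stands in for whatever API call the bot makes and is purely illustrative:

```python
import time


def fetch_all(titles, fetch_page, delay=1.0):
    """Read pages one at a time, pausing between requests.

    Serial requests with a small courtesy delay are all the API etiquette
    asks of read-only runs; `fetch_page` is any callable taking a title
    and returning its data.
    """
    results = {}
    for i, title in enumerate(titles):
        if i:                      # no pause before the first request
            time.sleep(delay)      # be considerate: one request at a time
        results[title] = fetch_page(title)
    return results
```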
I've started a basic page at WP:COSDAY to outline/brainstorm possible ways to handle things and coordinate the trial/gather feedback. It's a work in progress, so should not be considered finalized or ready to be more widely advertised. I'm mentioning it here to get some early feedback on it so we can adjust things. Mostly looking for BAG input at this point, but everyone is welcome to comment. Headbomb {t · c · p · b} 19:51, 7 December 2020 (UTC)
BAG nomination requirement tweaks
Just a notice that I removed the requirement to post a notice at WT:BOTS (diff) in favour of WT:BOTPOL. WT:BOTS hasn't been a hub of discussion in years now that WP:BOTN is the central place, and WT:BOTPOL is a lot more relevant since that is the policy page most people concerned with the interpretation of bot policy would actually watch.
- WT:BAG explaining why they would be a good member of the team and outlining past experience, and then should advertise the discussion at WP:AN, WP:VPM,
WT:BOTS→ WT:BOTPOL and WP:BOTN.
Feel free to revert if you object. Headbomb {t · c · p · b} 23:18, 1 December 2020 (UTC)
- Maybe VPM should be changed to VPT as well? VPT has more active watchers, and the people watching it are probably more likely to be interested than those watching/regular at VPM. ProcrastinatingReader (talk) 23:21, 1 December 2020 (UTC)
- If we change things, and I'm not saying we should, I'd go WP:VPP over WP:VPT personally. VPT, while technical, is mostly about technical issues with templates, modules, the software/html of the site, etc. VPP is at least policy-related. Headbomb {t · c · p · b} 23:48, 1 December 2020 (UTC)
- VPT is more than that. It's the one well-watched page for all things technical, and bots are of course technical. My own BAG nomination received zero comments in the first two days after I notified all prescribed venues. Then I posted another notification at VPT following which there were 5 comments within hours. – SD0001 (talk) 05:59, 2 December 2020 (UTC)
- If we change things, and I'm not saying we should, I'd go WP:VPP over WP:VPT personally. VPT, while technical, is mostly about technical issues with templates, modules, the software/html of the site, etc. VPP is at least policy-related. Headbomb {t · c · p · b} 23:48, 1 December 2020 (UTC)
- Since we're talking about it, the notification to AN seems useless (besides for admin bots, maybe). --Izno (talk) 14:08, 2 December 2020 (UTC)
- Useless maybe, but it is a highly watched board. 913 recent talk page watchers, compared to 611 of VPR, 561 of VPT, 301 of VPM, or 128 of this page. Aside from the Main Page or advertising on watchlists I'm not sure there are better venues to get attention from likely interested individuals. Or well, there is always ANI (1,253 watchers) ProcrastinatingReader (talk) 14:18, 2 December 2020 (UTC)
- I believe the idea was to let a variety of forums know about the nomination, so that BAG didn't become a clique of self-selected people. This mostly satisfies people who watch AN as a preventative measure against the abuse of admin powers, or something. With the logic that this lets them monitor BAG for similarly appointing grossly incompetent people. If we want more advertising, we could always add BAG nominations to the current admin/crat nomination templates, although that might have some unintended consequences of increasing drama. Headbomb {t · c · p · b} 16:56, 2 December 2020 (UTC)
- Regarding that last point... yeah, pretty much no. BAG technically gives no user rights, so while it is a good position to give a rubber stamp for a bot task, I don't think it merits the scrutiny of an RfX (and the drama that comes with it). Primefac (talk) 18:25, 2 December 2020 (UTC)
- Before this gets archived, final thoughts on changing WP:VPM to either WP:VPT or WP:VPP? My preference is the former, since it seems generally only technically-involved people comment on BAG noms, so there's an increased chance of participation from a VPT notification than a VPP one I feel. ProcrastinatingReader (talk) 20:01, 31 December 2020 (UTC)
Please share your experience with bots on Wikipedia!
Hi there!
We are researchers interested in understanding how bots are created and managed on Wikipedia. We admire the culture of collaboration that you have built here, and are interested in it as a model that follows the ideals of the “old” Internet and collective governance structures. So we would love to talk to editors, BAG group members, bot operators and enthusiasts and hear about your experience. Please respond to us here, leave us a message on our talk pages, or email us if you are interested in sharing your experience! We’ll reach out and set up a 30-45 minute interview over the platform of your choice.
Stay safe and happy holidays!
Bei Yan (talk page) Assistant Professor, Stevens Institute of Technology
Virginia Leavell (talk page) PhD Candidate, University of California, Santa Barbara — Preceding unsigned comment added by Momobay (talk • contribs) 22:58, 17 December 2020 (UTC)
- Just letting everyone know this is a legit research group, and they have interviewed me on September 21st. They're very nice people, and I would encourage anyone that reads this to bring their own viewpoints and perspective to the table. Headbomb {t · c · p · b} 00:22, 18 December 2020 (UTC)
- @MBisanz, Xaosflux, Anomie, Maxim, MaxSem, and SQL: pinging you specifically because you're all dinosaurs with a lot more knowledge about pre-bot and early-bot policy days than me. My knowledge of bots starts around July 2008, but I hadn't paid much attention to events that surrounded bots in those days. Headbomb {t · c · p · b} 20:16, 18 December 2020 (UTC)
- Nice initiative. -- Magioladitis (talk) 21:30, 31 December 2020 (UTC)
An upgrade to Cluebot NG
I've been working on a new vandalism detection system for some time now. While the system that I have created appears not to be as good as Cluebot NG overall, it does have some strengths where Cluebot is weak. For example, it uses a grammar check rating to assign each edit a score between -1 and 1 as a measure of whether an edit makes an article's grammar better or worse. This is a pretty good predictor of whether or not an edit is vandalism.
My idea is to install my new system as a supplement to Cluebot NG. In order to do that, I will need to use the confidence score that Cluebot outputs as an input to my new vandalism detector. The new bot will use this information in combination with some other predictors I have derived (like grammar check) to catch some vandalism that Cluebot misses. We should be able to do this without increasing the overall ratio of false positives to true positives. I am training my bot on two datasets called the PAN-WVC-10 and PAN-WVC-11. To finish my project, all I need is Cluebot's confidence scores. Can someone here help me run Cluebot NG on these datasets? Sam at Megaputer (talk) 14:09, 6 December 2020 (UTC)
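As a rough illustration of the kind of combination described above, one could blend ClueBot NG's confidence with a grammar-delta feature through a logistic function. The weights below are invented for demonstration and are not from Sam's actual model:

```python
# Illustrative sketch only: blending ClueBot NG's confidence score with
# a grammar-change rating. Weights and bias are made-up demo values.
import math

def combined_vandalism_score(cluebot_confidence, grammar_delta,
                             w_cluebot=4.0, w_grammar=-2.0, bias=-2.0):
    """Blend ClueBot NG confidence (0..1, higher = more likely vandalism)
    with a grammar-change rating (-1..1, higher = grammar improved).
    Returns a probability-like score in (0, 1)."""
    z = bias + w_cluebot * cluebot_confidence + w_grammar * grammar_delta
    return 1.0 / (1.0 + math.exp(-z))

# An edit ClueBot is unsure about but which clearly worsens grammar
# scores higher than one that improves grammar:
risky = combined_vandalism_score(0.5, -0.8)
safe = combined_vandalism_score(0.5, 0.8)
```

The point of the sketch is only the shape of the combination: a second signal shifts borderline ClueBot scores in one direction or the other without overriding confident ones.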
- One of the operators may be able to help: @Cobi, Rich Smith, and DamianZaremba: ProcrastinatingReader (talk) 15:29, 6 December 2020 (UTC)
- Thanks for the ping PR, @Sam at Megaputer: that is pretty good. I would suggest talking to @Cobi: about this, maybe use 'Email This User' to pop him an email as he is not always on Wikipedia - RichT|C|E-Mail 16:15, 6 December 2020 (UTC)
- Thanks to both of you! I have sent the email. Hopefully he will reply soon. Sam at Megaputer (talk) 16:30, 6 December 2020 (UTC)
- @Sam at Megaputer: if you haven't already, you may also want to check out mw:ORES and mw:ORES review tool. If you have come up with a new way to help score changes, you may be able to integrate to that system instead or as well. The benefit of ORES scoring is that it runs server side and can inject scoring points in to the feed that the existing other secondary checks (such as bots) use. — xaosflux Talk 19:55, 7 December 2020 (UTC)
- @Xaosflux: Thanks! My project may potentially have something to add to ORES also, so this is worth looking into. I noticed in my brief assessment of the tool the ORES system for evaluating the quality of an edit appears to be far less sophisticated than Cluebot's. For example, it uses a list of words commonly found in damaging/undamaging edits to predict whether an edit is good or not while Cluebot uses a Naive Bayes classifier. It may be possible to implement a similar system for ORES if I can talk to the right people. The challenge for me here is that I am not actually a programmer so much as I am a data analyst. The software I use makes it easy for me to perform the analysis, but I may need some help installing the result. Sam at Megaputer (talk) 21:03, 7 December 2020 (UTC)
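For reference, ORES exposed a public scoring API (documented at mw:ORES). A minimal sketch of querying it for a "damaging" score might look like the following; the revision ID is a placeholder, not a real edit under review:

```python
# Minimal sketch of querying the ORES scoring API discussed above
# (v3 endpoint per mw:ORES); the revision ID is a placeholder.

def ores_scores_url(wiki="enwiki", revids=(1234567,), models=("damaging",)):
    """Build an ORES v3 scores URL for the given wiki/revisions/models."""
    return ("https://ores.wikimedia.org/v3/scores/{}/"
            "?models={}&revids={}").format(
        wiki, "|".join(models), "|".join(str(r) for r in revids))

url = ores_scores_url()
# Live call (requires network access):
#   import json
#   from urllib.request import urlopen
#   data = json.load(urlopen(url))
#   p = data["enwiki"]["scores"]["1234567"]["damaging"]["score"]["probability"]["true"]
```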
- @Rich Smith: I'm still waiting to hear back from Cobi, but I have something that may interest you in the meantime. It is my understanding that Cluebot whitelists registered users with more than 50 edits and IPs with more than 250 edits since most vandalism comes from new users. I think that these numbers were chosen based on qualitative observation and without looking at any hard data? My research shows that IPs continue to vandalize at a rate of around 15% even up to their 500th edit, while the vandalism rate for logged in users falls below that number by the time they reach their third edit. For a registered user making their 50th edit, the probability that this edit is vandalism is only about 3%. So in conclusion, I think that the threshold for whitelisting should be much higher for IPs and much lower for registered users. We may be able to significantly increase the performance of Cluebot just by moving these thresholds around. Sam at Megaputer (talk) 21:21, 8 December 2020 (UTC)
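A hypothetical sketch of the reshuffled whitelist logic Sam proposes; the 500/3 cut-offs below are drawn from the statistics in his comment and are not ClueBot NG's actual configuration:

```python
# Illustrative only: whitelist thresholds suggested by the discussion
# above (higher bar for IPs, lower for registered accounts). These are
# NOT ClueBot NG's real values (reportedly 250 for IPs, 50 registered).
IP_WHITELIST_EDITS = 500        # IPs reportedly vandalize ~15% even near edit 500
REGISTERED_WHITELIST_EDITS = 3  # registered users drop below that rate by edit 3

def is_whitelisted(is_registered, edit_count):
    """Return True if the editor should be skipped by the classifier."""
    threshold = REGISTERED_WHITELIST_EDITS if is_registered else IP_WHITELIST_EDITS
    return edit_count >= threshold
```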
- Now that's super interesting! Thank you very much for making this. Enterprisey (talk!) 10:15, 2 January 2021 (UTC)
Edit confirmed protection
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
The FACBot reported an error when it tried to update Belarus, an article with WP:ECP. The FACBot account is more than 30 days old and has more than 500 edits, so I presume that ECP locks out bots, but cannot find any documentation to this effect. Does anyone know anything about this? Hawkeye7 (discuss) 02:51, 17 January 2021 (UTC)
- Wikipedia:User access levels#Extendedconfirmed states "This access is included and bundled in the bot and sysop (administrator) user groups", and FACBot is indeed a bot, so this is surprising. Are you positive the error was related to the EC protection? — The Earwig talk 05:28, 17 January 2021 (UTC)
- Yeah, per Special:ListGroupRights the "bot" flag has "extendedconfirmed" bundled into it. ProcrastinatingReader (talk) 11:42, 17 January 2021 (UTC)
- Definitely would help us debug if we knew the specific error, unless it was just a generic "could not edit page" (in which case we'll just have to guess). Primefac (talk) 12:22, 17 January 2021 (UTC)
- The Bot reported this error:
unable to edit 'Belarus' (3) : protectedpage: This page has been protected to prevent editing or other actions.
I confirmed that the protection was ECP, and was able to edit it myself to make the change the Bot was attempting. Hawkeye7 (discuss) 19:41, 17 January 2021 (UTC)
- @Hawkeye7, The Earwig, ProcrastinatingReader, and Primefac: this is likely because the bot operator is using the api and has not included the 'edit protected pages' grant for the bot - I was able to duplicate this with a BotPasswords style grant and an API edit. — xaosflux Talk 14:57, 17 January 2021 (UTC)
- @Xaosflux: How is the 'edit protected pages' grant included? Hawkeye7 (discuss) 19:41, 17 January 2021 (UTC)
- @Hawkeye7: the operator picks what they want to be allowed in their grant, if using BotPasswords it is in Special:BotPasswords and if using OAUTH it is on meta:Special:OAuthManageMyGrants - log in to the webui as the bot and then view/update the grants as needed. — xaosflux Talk 21:24, 17 January 2021 (UTC)
- Note to perform an action it must be both allowed in the grant and the account must have the actual permission to perform the action. — xaosflux Talk 21:26, 17 January 2021 (UTC)
- Thank you! I have updated "edit protected pages" grant for the bot, and confirmed that the account does have actual permission. I guess when I set the Bot up I was not thinking of ECP. Hawkeye7 (discuss) 21:44, 17 January 2021 (UTC)
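As a side note, one way to verify a fix like this is to log in with the bot's credentials and ask the API for the session's effective rights, since BotPasswords strips any rights not covered by the selected grants. ECP editing requires the "extendedconfirmed" right, while fully protected pages require "editprotected". A small sketch (the API request shown in the comment is the standard one; the helper function is illustrative):

```python
# Sketch: after logging in via the bot's BotPassword, the session's
# effective rights can be listed with:
#   GET /w/api.php?action=query&meta=userinfo&uiprop=rights&format=json
# Rights outside the selected grants are stripped, which is why the bot
# hit "protectedpage" above despite the account having the bot flag.
ECP_RIGHTS = {"extendedconfirmed", "editprotected"}

def missing_protection_rights(userinfo_rights):
    """Given the rights list from meta=userinfo&uiprop=rights, return
    which protection-related rights the session lacks."""
    return sorted(ECP_RIGHTS - set(userinfo_rights))
```

If the returned list is non-empty after enabling the "Edit protected pages" grant, the account itself lacks the underlying permission, matching xaosflux's note that both the grant and the actual permission are required.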
Should Cewbot remove interlanguage link templates once local articles exist?
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
This task ([[3]]) destroys the information about which other wikis have relevant articles about the subject. For instance, if the article is created, and then one week later is deleted again, the {{ill}} link is gone and the work of the editor who originally placed it there has been binned.
Over at User talk:Kanashimi/Archive 1#Task 1 Convert interlanguage link templates with local article to wikilinks I was told this place would be a better venue for this discussion. Please note in particular the suggestion by davidwr: Have the bot add the recently-enabled |display=force to {{Interlanguage link}} if the English page exists. You eliminate the expensive parser function call, you don't have the "what happens if the en-wiki article disappears" problem.
As far as I can see neither this specific proposal nor the greater issue got a proper resolution. Cheers, CapnZapp (talk) 14:20, 12 January 2021 (UTC)
Extended content
I have created a talk section over at the {{ill}} talk page if y'all rather discuss there than here: Template talk:Interlanguage link#Task 1 Convert interlanguage link templates with local article to wikilinks (again). Best, CapnZapp (talk) 14:56, 13 January 2021 (UTC)
- I don't see why the bot should eliminate any ill links at all. As someone who edits frequently in articles with interlanguage links and adds them where needed, I'm opposed to anything which reduces the links between sister projects (which is a stated goal of Wikipedia). In addition, regarding the case described at the ill talk page of an ill link to a page later deleted where the foreign links had already been removed by cewbot, such decisions should not be made by bot at all, but by people, who can better evaluate the likelihood of such an article being deleted. Failing that, if consensus supports bot removal, then I would urge an automatic delay of some interval to be discussed, perhaps a year, to make the "deleted target" case less likely to occur, but I think that would be second best. The template already hides foreign links when the article on en-wiki is created; this should be sufficient; the rest should be left to discussion among editors concerned, who can consider what's best for the article and its future and discuss amongst themselves to find consensus, and not to a bot. Mathglot (talk) 18:05, 13 January 2021 (UTC)
- I too regularly edit articles with {{ill}} links and create them often. They can sometimes be elaborate in order to deal with foreign scripts or naming conventions (Hungarian), so the resulting Wikitext often looks frightening. Therefore, removing those constructs once local articles exist is a welcome cleanup task. Can anyone provide an example of an {{ill}}-linked article that was converted to a local link and the article was then deleted? -- Michael Bednarek (talk) 01:21, 14 January 2021 (UTC)
- Mathglot, as was mentioned (and directly asked for at least once) in the archived discussion linked by the OP, and to follow along with Michael's comment: your line of thinking has not been substantiated, and as far as I can tell is little more than FUD and/or concern about hypothetical "what ifs". The bot already has a delay built in (a week if I remember correctly) and I see no reason to extend that or cease its operation. Primefac (talk) 01:49, 14 January 2021 (UTC)
- In addition, {{ill|Alexandra-Therese Keining|sv}} displays as Alexandra-Therese Keining with a link to the currently existing enwiki article, and that does not help "links between sister projects". That single template adds 2 to the expensive parser function count—that quickly adds up and makes editing the page slower. An example of Cewbot removing ill is diff. Perhaps the bot's edit summary could show the wikitext that is removed so searching for "{{ill" in history would find it. Please do not pad out the wikitext with a hidden comment. Johnuniq (talk) 02:35, 14 January 2021 (UTC)
The question here is why remove the template completely? It recently was given the ability to skip the computationally intensive part, with |display=force. So far I see these arguments:
A) the resulting Wikitext often looks frightening
I sincerely do not see this as a good argument - one template isn't worse than another, and certainly editing a page with loads of complex references can be just as hard if not harder - finding the actual display text among all the citation data. Basically let us assume WP:COMPETENCE. I do acknowledge there might well be exceptional cases where massive amounts of {{ill}} templates turns the page into alphabet soup, but that's no good excuse for Wiki-wide bot intervention. Such pages should be fixed manually. Or at the very least, by a bot doing reversible work, or where all human-added data are retained.
B) the bot delays one week
One week is obviously not enough. Even the quick PROD procedure takes one week. An AfD definitely takes longer. Waiting one week is enough only for speedy delete cases. But this is really irrelevant - if the bot stopped using its current scorched earth approach, there would be no reason to delay at all. See proposal at the end.
C) removing those constructs once local articles exist is a welcome cleanup task
No it isn't. A template is not a "construct". It's a template, and Wikipedia uses loads of templates that aren't removed just to clean up the page. Please do not claim consensus for this conclusion when participating in a discussion directly contesting such a consensus! I started the original talk discussion (link at top of section) because I see it as problematic we have a bot that undoes the contributions of humans.
D) as far as I can tell is little more than FUD and/or concern about hypothetical "what ifs"
Stop belittling and dismissing other users, Primefac. Language of this sort is inflammatory and not taken in good faith. This is not the first time I have asked you to remain polite, and I will ignore it except to say that you are merely trying to shift the burden of proof away. Since the bot has gotten new capabilities YOU need to argue why the bot should keep destroying the work of human editors even though it can easily be modified not to, while still avoiding any computational load.
Let us change the bot's behavior to use this new |display= parameter. Let us furthermore change the bot to change this parameter back if and when it detects the link has gone red again. This way, humans are freed from needless busywork, and we have one less bot that actively undoes human contributions. CapnZapp (talk) 10:18, 14 January 2021 (UTC)
- Short of an RFC deciding that interwikilinks are preferable to enwiki links, which I do not see for one second passing, Cewbot is functioning as intended, and is supported by consensus. This is the English Wikipedia, and we link to our own articles in the mainspace. {{ill}} is specifically for pages with no current enwiki pages. Once those pages exist, {{ill}} serves no purpose and should be removed. Headbomb {t · c · p · b} 20:30, 17 January 2021 (UTC)
- Thank you for your opinion. A few remarks, though: First, nobody is arguing to make "interwikilinks preferable to enwiki links". Second, "Cewbot is functioning as intended" is irrelevant for this discussion - that's a truism that doesn't support either viewpoint, meaning if we do arrive at a consensus to save {{ill}} templates then of course the bot is no longer working as intended, and so it will be tweaked. Finally, if I may: you aren't directly meeting the argument made here, but can I assume that, in your opinion, 1) {{ill}} templates aren't worth saving even after their computational load is eliminated, and 2) the work of editors adding {{ill}} templates isn't worth saving, User:Headbomb? Thanks for any clarification you can provide. CapnZapp (talk) 22:04, 17 January 2021 (UTC)
- "The work of editors adding {{ill}} templates isn't worth saving." It is saved, in the form of a link to an enwiki article with interwikilinks to other languages available in their standard location. When there's an enwiki article, {{ill}} no longer serves any useful purpose endorsed by the community. And unless you can point to an RFC where the community has decided that {{ill}} should be preserved after an enwiki article exists, you will not find much support to alter Cewbot's behaviour. Headbomb {t · c · p · b} 00:28, 18 January 2021 (UTC)
- @Headbomb: If I hear you correctly, you are saying that we are "putting the cart before the horse" and run a full RFC on the question about whether keeping ill templates around or preserving the ability to automatically restore them if the en-wiki page is later deleted is desirable. Am I hearing you correctly? davidwr/(talk)/(contribs) 00:38, 18 January 2021 (UTC)
- I assume no one wants to convert every plain [[Example]] link to use {{ill}}. Therefore Cewbot is working well, and the only discussion to have is whether one week is sufficient delay. This noticeboard is not the place where that should be decided. Johnuniq (talk) 02:06, 18 January 2021 (UTC)
- Just to note - User:Headbomb appears to entirely ignore the entire point of this discussion: saying "It is saved, in the form of a link to an enwiki article with interwikilinks to other languages available in their standard location" means they haven't read the actual complaint. Regards, CapnZapp (talk) 07:48, 19 January 2021 (UTC)
- User:Johnuniq I don't follow your line of reasoning. Instead of me trying to interpret what appears to be quite absurd, could I ask you to explain? CapnZapp (talk) 07:53, 19 January 2021 (UTC)
- And finally, about the "this isn't the place" comment. I would like to make everybody aware that in my attempts to raise awareness of this issue, I was redirected here by User:Primefac - twice. So please User:Johnuniq: if you want to argue this isn't the place to have this discussion, do suggest which place is better. CapnZapp (talk) 07:59, 19 January 2021 (UTC)
Running bot from AWS
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
I'm working on running Rick Bot from AWS rather than my own computer - with the eventual goal of running a generalized meta-bot capable of fetching the source for a list of bots from Wikipedia and running them (addressing the problem of bot disappearance). I'm using AWS serverless infrastructure for this which means the bot runs as a lambda function within AWS. I'm at the point where the bot is trying to edit a page and it turns out the entire AWS IP address range is blocked as an open proxy. I'm expecting I can control the apparent external IP address coming from AWS, but before getting too deep into this thought it might be worth asking if anyone else has run into this and has found a solution. Thanks! -- Rick Block (talk) 01:07, 9 January 2021 (UTC)
- This happens whenever you host a bot on a hosting provider, it happened to ProcBot as well. The IPs are blocked globally and locally. The bot flag grants local "ipblock-exempt" so if you're using your bot account this shouldn't happen. ProcrastinatingReader (talk) 01:19, 9 January 2021 (UTC)
- (edit conflict) Aren't bot accounts automatically exempt from IP blocks? * Pppery * it has begun... 01:19, 9 January 2021 (UTC)
- @Rick Block: are you trying to edit here on enwiki, or another project? The bot account should be able to bypass ip restriction. Are you using WebUI or API? For API are you using OAUTH or BotPassword? — xaosflux Talk 11:04, 9 January 2021 (UTC)
- I was trying it out with a user account on enwiki. Sounds like it should work with the bot account. I'm using pywikibot. Thanks everyone! -- Rick Block (talk) 01:41, 10 January 2021 (UTC)
- @Rick Block: if you need to do some temporary testing you may give an account you control ip-block exempt for the test (set an expiration on the grant). — xaosflux Talk 14:45, 12 January 2021 (UTC)
Cyberbot II reported at AIV
Cyberbot II (talk · contribs · deleted contribs · nuke contribs · logs · filter log · block user · block log) – Not vandalism but malfunctioning [4][5]. The operator was notified days ago but is inactive. (CC) Tbhotch™ 19:06, 17 January 2021 (UTC)
- Looks like the typical database lag error. Bot isn't malfunctioning, but there should be a built-in protection for page-blanking. Headbomb {t · c · p · b} 19:19, 17 January 2021 (UTC)
- I've gone ahead and blocked the account. Whatever the issue is, it definitely shouldn't be blanking pages. Per User:Cyberbot II/Run/PC, it seems like the task shouldn't be running at all? Mz7 (talk) 19:20, 17 January 2021 (UTC)
- (non-admin) I have changed the status of the bot on User:Cyberbot II/Status to say "disable" - to reflect the bot's block by Mz7. P,TO 19104 (talk) (contribs) 23:57, 17 January 2021 (UTC)
- Cyberpower678 can you disable the code for this task manually and push an update, so your bots can be unblocked and can go back to doing their other tasks while you develop a fix for that particular task? ProcrastinatingReader (talk) 23:37, 1 February 2021 (UTC)
- ProcrastinatingReader, all bot tasks are affected. —CYBERPOWER (Around) 23:53, 1 February 2021 (UTC)
Emergency shutoff-compliant?
Cyberbot I (talk · contribs) has been blocked for the same problem of failing to respect its shutoff flag - Wikipedia:Administrators' noticeboard#CHUBot / Cyberbot I. This fires off a bunch of points and questions:
- grateful that someone's taken on a whole bunch of old/abandoned bots;
- are bots being tested for compliance to their shutoff flag as part of BAG approval?
- shouldn't failure of the validation result in NOT RUN rather than RUN? and
- is there a common shutoff system shared between bots, or are they all concocting their own methods?
--Thanks, -- Cabayi (talk) 16:44, 27 January 2021 (UTC)
- are bots being tested for compliance to their shutoff flag as part of BAG approval? Generally no; beyond code reviews (if they're done) we assume it is being implemented correctly, and operators are responsible for testing it (same as with {{nobots}} compliance). If this was a rampant problem, I could see that changing. Page-based shutoff is definitely a "nice to have" but almost never actually required as part of approval (see relevant policy).
- shouldn't failure of the validation result in NOT RUN rather than RUN? Agreed; if the bot is looking for a specific string to stop rather than start, this is a design flaw.
- is there a common shutoff system shared between bots, or are they all concocting their own methods? Generally no; bot ops are implementing their own systems. Bots typically do not share code, aside from certain libraries that may or may not provide a function for this, and are written in a variety of programming languages. There is no straightforward way to share this behavior across all bots unless it was handled server-side, and there is nothing in MediaWiki to support this.
- Thanks. — The Earwig talk 17:36, 27 January 2021 (UTC)
- (edit conflict) "are bots being tested for compliance to their shutoff flag as part of BAG approval?" Not that I'm aware; personally I don't check for that. Shutoff flags are generally down to operator discretion of whether to have one or not, unless the BRFA sets that as an operating condition for some particular reason. Generally, the assumption in a BRFA is that the bot operates in a technically sound manner, and that if future updates (or circumstances) change that then WP:BOTISSUE applies. "is there a common shutoff system shared between bots, or are they all concocting their own methods?" Bot operators decide how to code their kill switch. They'll use the Template:Emergency-bot-shutoff template to link to the page generally, but this is just an aesthetic similarity. How the bot actually shuts off (external tool to trigger off, such as what User:InternetArchiveBot requires, or onwiki using a run page (and in that case, whether the text is "true/false", "on/off", or "enabled/disabled")) is all down to what the bot operator wants. There's a general lack of consistency in this regard.
- The purpose of a kill switch is mainly to avoid having to block a bot, which would stop all of its tasks at once. Hence if the switch doesn't work it's not that big of a deal (the bot can simply be blocked, so long as one can find an admin to do that). ProcrastinatingReader (talk) 17:38, 27 January 2021 (UTC)
- I will comment that (as an aside) the kill switch of User:InternetArchiveBot has long felt off to me. One needs to go to an external tool and grant OAuth perms with their Wikimedia account to be able to disable the bot, which is somewhat problematic. The last time it had to be disabled an editor was (not entirely unreasonably) unwilling to do that. Kill switches, especially for high-volume bots, should be onwiki or at worst not behind external OAuth. Plus, the page isn't even loading atm. ProcrastinatingReader (talk) 17:42, 27 January 2021 (UTC)
- It loads for me. * Pppery * it has begun... 21:30, 27 January 2021 (UTC)
- For me now, too. It wasn’t loading at the time I wrote the comment, though. ProcrastinatingReader (talk) 22:21, 27 January 2021 (UTC)
- Taking my bot as an example, I have my own custom-written shutoff code. It is implemented in a core file (imported by all my tasks) that checks whether the shutoff page exists and whether it has any content. If the page has content or has been deleted, the task does not run. If the check fails for any reason, the code treats the bot as disabled. I recently tested my shutoff code (see the bot's notice of shutdown at my talk page).
- From what I can tell, there is no built-in pywikibot code for a shutoff. In my experience a shutoff page is also fairly uncommon, and tends to be used when a bot has many different tasks. From what I understand, there is no common shutoff system shared by bots (especially since enwiki bots are written in many different languages, with many different frameworks or none at all). I would say that failure in validation code should result in shutdown rather than continuing. I've never seen a BAG member ask for a bot's shutdown code to be tested. Dreamy Jazz talk to me | my contributions 23:19, 27 January 2021 (UTC)
Any other bot to reset the sandbox?
I'm concerned that Doggy54321 will just get stuck at the sandbox if there's no help with clearing it. Protection has been proposed as a solution, but should be used as a last resort because, well, it's the sandbox. This should be one of the simplest possible bots to write - just replace the entirety of the sandbox with the boilerplate text, sleep for an hour, then do it again. Samsara 19:04, 28 January 2021 (UTC)
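The replace-sleep-repeat loop described above could be sketched as follows. This is a hypothetical illustration, not any approved bot's code; `save_page` stands in for whatever page-save call the framework provides, and the boilerplate text is assumed:

```python
import time

# Assumed placeholder for the real sandbox header template.
BOILERPLATE = "{{Sandbox heading}}\n<!-- Hypothetical boilerplate -->\n"

def reset_sandbox_forever(save_page, interval_seconds=3600, max_cycles=None):
    """Overwrite the sandbox with boilerplate, sleep, and repeat.

    `max_cycles` limits the loop for testing; a real bot would run
    with max_cycles=None (i.e. forever).
    """
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        save_page("Wikipedia:Sandbox", BOILERPLATE, "Bot: resetting sandbox")
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_seconds)
    return cycles
```

A production bot would also want error handling and a shutoff check, as discussed in the thread above.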
- @Samsara: I agree. There have been about 10 users in the past 6 hours who have removed the header of the sandbox, two of them were edit warring with me in turn. I’ve gotten 25+ edits from the sandbox today alone. D🐶ggy54321 (let's chat!) 19:14, 28 January 2021 (UTC)
- There are at least two other approved clearing bots: Hazard-Bot (active) and lowercase sigmabot II (hasn't edited in a couple weeks). I asked Sigma if he can check on it. — The Earwig talk 19:26, 28 January 2021 (UTC)
- I'm told that lowercase sigmabot II is now running again. — The Earwig talk 06:52, 29 January 2021 (UTC)