Deactivated
This bot is no longer active on Wikipedia.

Function

This bot patrols[1] newly created pages in the main space and matches their contents against a web search. Pages found to contain a significant portion of text taken from another web page are tagged (and categorized) for human attention according to the following guidelines (a code sketch follows the list):

  • Pages including little or no content beyond that of the external page get a slightly more stern tag.
  • If the page is a copy of another Wikipedia page,[2] then the page is also tagged (but with a Wikipedia-specific tag).
  • Web sites can be whitelisted as having a permissive license.
  • If one of the accepted (configurable) attribution or permission tags is present on the page, then the page is ignored.
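As a rough illustration, the guidelines above can be expressed as a small Python sketch. Everything here is hypothetical: the thresholds, tag strings, and helper inputs are invented for clarity, and the actual 'bot (written in Perl) does not work this way line for line.

    # Hypothetical sketch of the tagging guidelines; all names, thresholds,
    # and tag strings are illustrative, not taken from the 'bot's source.
    SIGNIFICANT = 0.40   # assumed share of text counting as a "significant portion"
    NEAR_TOTAL = 0.90    # assumed share meaning "little or no content beyond"

    def choose_tag(page_text, match_url, overlap, whitelist, permission_tags):
        """Return the cleanup tag to apply, or None to leave the page alone."""
        if any(tag in page_text for tag in permission_tags):
            return None                   # accepted attribution/permission tag present
        if match_url in whitelist:
            return None                   # source site whitelisted as permissively licensed
        if "wikipedia.org" in match_url:
            return "wikipedia-copy-tag"   # copy of another Wikipedia page
        if overlap >= NEAR_TOTAL:
            return "stern-copyvio-tag"    # little or no content beyond the external page
        if overlap >= SIGNIFICANT:
            return "copyvio-tag"          # significant copied portion
        return None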

If the 'bot modifies the newly created page in any way, it will also leave a message on the creator's talk page and, for suspected copyright violations, a notice to bring attention to the apparent problem; a sketch of such a notice follows.
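For illustration, posting such a talk-page notice through the MediaWiki API could look like the following Python sketch (the 'bot itself is written in Perl; the session handling, section title, and message wording are assumptions):

    # Sketch: leaving a new-section notice on the creator's talk page.
    # A real bot must be logged in; `session` is an authenticated
    # requests.Session. The message wording is purely illustrative.
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def notify_creator(session, creator, article):
        # Fetch a CSRF token, required by action=edit.
        token = session.get(API, params={
            "action": "query", "meta": "tokens", "format": "json",
        }).json()["query"]["tokens"]["csrftoken"]
        session.post(API, data={
            "action": "edit",
            "title": "User talk:" + creator,
            "section": "new",           # append a new talk-page section
            "sectiontitle": "Possible copyright problem with " + article,
            "text": "Your new article appears to duplicate an external web page.",
            "token": token,
            "format": "json",
        })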

Additionally, the 'bot watches and/or updates a number of subpages under its user page for runtime configuration, directives and status information.[3]
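A configuration or status subpage can be read back with a single API query, along these lines (the subpage title is hypothetical, and Python with the requests library stands in for the 'bot's Perl):

    # Sketch: fetching the wikitext of a runtime-configuration subpage.
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def read_subpage(title="User:CorenSearchBot/config"):  # hypothetical title
        data = requests.get(API, params={
            "action": "query",
            "prop": "revisions",
            "rvprop": "content",
            "rvslots": "main",
            "titles": title,
            "format": "json",
            "formatversion": "2",
        }).json()
        # With formatversion=2, pages come back as a list.
        return data["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]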

Operation

The CorenSearchBot is meant to function continuously, without direct human supervision.

Program

CorenSearchBot is written in Perl and takes advantage of the new MediaWiki API for everything.
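For example, the list of newly created main-space pages is available through that same API; a throttled polling loop might look like this in Python (the real 'bot is Perl and the loop structure is an assumption, but the query parameters are standard and the rate limit comes from footnote 1):

    # Sketch: polling for newly created main-space pages, observing the
    # hard limit of one article processed every five seconds (footnote 1).
    import time
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def new_pages(limit=10):
        data = requests.get(API, params={
            "action": "query",
            "list": "recentchanges",
            "rctype": "new",        # page creations only
            "rcnamespace": "0",     # main space only
            "rclimit": str(limit),
            "format": "json",
            "formatversion": "2",
        }).json()
        return data["query"]["recentchanges"]

    for change in new_pages():
        print(change["title"])      # a real bot would fetch and check the text here
        time.sleep(5)               # hard rate limit: one article every five seconds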

The source code can be perused here, but it's not pretty.

Footnotes

  1. ^ It will, in fact, queue up every new page at creation, but it may defer reading (and possibly editing) the page for some time if the current Wikipedia load is too high. Regardless of the current load, there is a hard limit of one article processed every five seconds.
  2. ^ Copy-and-paste copies of Wikipedia pages are sometimes created as subtle vandalism.
  3. ^ One read per change, 2-3 updates per hour.