Manual:Pywikibot/refLinks

Copied from w:User:DumZiBoT/refLinks

DumZiBoT converts bare external links in references into named external links.

Here are some examples of his work: [2], [3], [4] and here is what he is doing now.

He usually runs every time a new XML dump is available. Processing a dump takes days: handling enwiki's March 15th dump took more than 120 hours, 5 days of uninterrupted work.

His owner is NicDumZ.

The idea

References like these:

  • <ref>[http://www.google.fr]</ref>
  • <ref>http://www.google.fr</ref>

are converted into this:

  • <ref>[http://www.google.fr Google<!-- Bot generated title -->]</ref>

A few remarks (a code sketch of the conversion follows this list):

  • The title used for the link is the HTML title of the linked page (taken from its <title> tag).
  • Newlines, carriage returns, and tabs in titles are converted into a single space to avoid overly long titles; extra spaces are also removed.
  • Titles containing ], several consecutive }, or ' are handled correctly by converting the offending characters to their HTML entities (this title encloses brackets [here]).
  • When the content type is not text/html (media files, .doc, etc.), a title cannot be extracted automatically, so the reference is only converted to <ref>http://lien.org/doc.pdf</ref>.
  • Lengthy titles are arbitrarily truncated to 250 characters. When this happens, "..." is appended to the title.
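
The sketch below shows, in Python, roughly what such a conversion involves. It assumes the third-party requests library; the helper names (fetch_title, sanitize_title, convert_ref) and the exact escaping rules are illustrative simplifications, not the actual reflinks.py code.

 import html
 import re

 import requests

 def fetch_title(url):
     """Fetch a page and return its HTML <title>, or None if there is none."""
     response = requests.get(url, timeout=30)
     response.raise_for_status()
     # Non-HTML content (PDF, .doc, ...) carries no <title> tag.
     if 'text/html' not in response.headers.get('Content-Type', ''):
         return None
     match = re.search(r'<title[^>]*>(.*?)</title>', response.text,
                       re.IGNORECASE | re.DOTALL)
     return html.unescape(match.group(1)) if match else None

 def sanitize_title(title, limit=250):
     """Normalize whitespace, escape wikitext-breaking characters, truncate."""
     # Newlines, carriage returns and tabs become a single space;
     # runs of spaces collapse into one.
     title = re.sub(r'\s+', ' ', title).strip()
     # ']' would close the external link early, so escape it.
     title = title.replace(']', '&#93;')
     # Apostrophe pairs and '}}' could be parsed as wiki markup.
     title = title.replace("''", "'&#39;").replace('}}', '}&#125;')
     if len(title) > limit:
         title = title[:limit] + '...'
     return title

 def convert_ref(url):
     """Turn one bare URL into a named external link, DumZiBoT-style."""
     title = fetch_title(url)
     if title is None:
         return '<ref>%s</ref>' % url  # no usable title: keep the bare form
     return '<ref>[%s %s<!-- Bot generated title -->]</ref>' % (
         url, sanitize_title(title))

For instance, convert_ref('http://www.google.fr') would produce the <ref>[http://www.google.fr Google<!-- Bot generated title -->]</ref> form shown above.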

Features

  • Reads the titles from PDF files
  • If a dead link is found, it is tagged using {{Dead link}}
  • When no <references/> or {{Reflist}} is in the page, <references/> is appended.
  • When duplicate references are found (i.e. references having exactly the same content), only the first is kept and a ref name is added to it, so the duplicates can point back to it (see the sketch after this list)
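
The duplicate merging could look roughly like the Python sketch below; merge_duplicate_refs is a hypothetical name, although the autogeneratedN names match the style of names the bot actually used.

 import re
 from collections import Counter

 REF = re.compile(r'<ref>(.*?)</ref>', re.DOTALL)

 def merge_duplicate_refs(text):
     """Keep the first copy of each identical <ref>...</ref>; replace
     later copies with a named reference pointing back at the first."""
     counts = Counter(match.group(1) for match in REF.finditer(text))
     names = {}

     def replace(match):
         content = match.group(1)
         if counts[content] < 2:
             return match.group(0)  # unique reference: leave it untouched
         if content not in names:
             # First occurrence: keep the content and attach a name.
             names[content] = 'autogenerated%d' % (len(names) + 1)
             return '<ref name="%s">%s</ref>' % (names[content], content)
         # Later occurrences reuse the first reference by name.
         return '<ref name="%s"/>' % names[content]

     return REF.sub(replace, text)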

Hey, you forgot some links!

Some links may not be changed, even after DumZiBoT's run. Any of the following may have occurred:

  • The HTML linked page has no title (rare, but happens).
  • DumZiBoT got an HTTP error while trying to fetch the page (see 4xx Client Error and 5xx Client Error). The link may be invalid, the page may no longer be available, or it may be protected. These links should be repaired or removed, but chances are that the error is temporary. Also, some pages, such as Google cache links and Google Books pages, give bots a 401/403 error even though they are available to readers. You may wish to try the Link checker tool to correct the problem. A rough status-check sketch follows this list.
  • Either the link or the HTML title is blacklisted.
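
As a rough picture of the HTTP handling described above (again assuming requests; the three-way classification is a simplification of what the script really does):

 import requests

 def check_link(url):
     """Classify a URL: 'ok', 'dead' (HTTP 4xx/5xx), or 'skip'."""
     try:
         response = requests.get(url, timeout=30)
     except requests.RequestException:
         return 'skip'  # network trouble may be temporary: leave the link alone
     if response.status_code >= 400:
         # Some sites (Google cache, Google Books) send 401/403 to bots
         # even though the page works for readers, so humans should check.
         return 'dead'  # candidate for a {{Dead link}} tag
     return 'ok'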

Blacklists

  • Link blacklist: for now, only w:JSTOR links are ignored, since for non-registered users JSTOR returns "JSTOR: Accessing JSTOR" as the page title. Please contact the bot's operator if you think that a particular domain should be blacklisted.
  • Title blacklist: based on an original idea from Dispenser, links whose titles contain register, sign up, 404 not found, and so on are excluded (a toy version of these checks follows this list).
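
A toy version of these checks; the two patterns are illustrative, not the script's actual lists:

 import re

 # Illustrative patterns only; the real lists live in the script itself.
 LINK_BLACKLIST = re.compile(r'jstor\.org', re.IGNORECASE)
 TITLE_BLACKLIST = re.compile(
     r'register|sign up|404 not found|accessing jstor', re.IGNORECASE)

 def is_blacklisted(url, title):
     """True when either the URL or its fetched title matches a blacklist."""
     return bool(LINK_BLACKLIST.search(url)
                 or TITLE_BLACKLIST.search(title or ''))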

Meta-data

Why doesn't DumZiBoT include extra information, like access date, author, or publication, or use citation templates? Changing the citation style in an article cannot be done without gaining consensus. SEWilcoBot and RefBot were blocked for changing the citation style in articles.

And what about server load?

The search for pages containing invalid references is made from the last XML dump, as sketched below. DumZiBoT only fetches from the servers those pages that needed modifications at the time of the dump. (Some pages are downloaded but turn out to need no changes, because their references were fixed between the dump and the fetch.)
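
A sketch of that offline pre-filtering, using Pywikibot's XmlDump reader; the BARE_REF pattern here is an assumption, simpler than the real one:

 import re

 from pywikibot import xmlreader

 # A bare reference: <ref>http://...</ref> or <ref>[http://...]</ref>.
 BARE_REF = re.compile(
     r'<ref>\s*\[?\s*https?://[^\[\]<>"\s]+\s*\]?\s*</ref>', re.IGNORECASE)

 def pages_needing_fixes(dump_path):
     """Scan an XML dump offline and yield the titles of pages that
     contained bare references at dump time; only these are fetched live."""
     for entry in xmlreader.XmlDump(dump_path).parse():
         if BARE_REF.search(entry.text):
             yield entry.title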

No. Read this talk page archive for further explanations.

Where do I request DumZiBoT to go through a specific page?

Nowhere. Just wait: DumZiBoT goes through every page that needs a fix whenever a new dump is available.

Online tool

However, thanks to Dispenser, you can manually run DumZiBoT's script on a page, or a modified script which makes more assumptions about references and formatting.

Where should I report a problem?

Does DumZiBoT still make any edits?

No, DumZiBoT has not edited since June 2009.
