Manual:RunJobs.php/de-formal

Since MediaWiki 1.40, maintenance scripts should be invoked through maintenance/run.php. Invoking a maintenance script directly will trigger a warning.

Details

The file runJobs.php is a maintenance script that lets you manually force the job queue to be processed. Normally, queued jobs are executed in response to user interaction with the wiki (ordinary Apache requests). By default, the job run rate is 1:1. This rate can be changed by adjusting $wgJobRunRate in LocalSettings.php. Note that the default memory limit for a job is 150 MB, so that a faulty job cannot consume all of the server's memory.
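For example, the rate can be lowered in LocalSettings.php so that busy wikis do not run a job on every request (the values below are only illustrative):

# Run a job on roughly 1 in 100 web requests instead of the default 1:1 rate
$wgJobRunRate = 0.01;
# Or set the rate to 0 to run no jobs on web requests and rely entirely on runJobs.php
# $wgJobRunRate = 0;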

You may need this script if there is too little traffic on your wiki to work through the queue, or if an exceptionally large number of jobs needs to be processed. Be aware, however, that on many server configurations this can make your wiki sluggish or unresponsive until the script finishes. It is recommended to try 50 or 100 jobs first to get a feel for the script's speed before running it without parameters (by default the script processes 10,000 jobs per run) or for more than a few hundred jobs.
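For such a cautious first run, the batch size can be capped like this (MW 1.40+ invocation shown; on older versions call runJobs.php directly):

php maintenance/run.php runJobs --maxjobs 50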

Note also that it is possible to empty the job queue completely by deleting the job table in your wiki's database, for example if you accidentally ran a script that loaded a large number of unwanted or unneeded jobs into the queue. Make sure there are no jobs left in the queue that you still need, because the queue will be deleted irrecoverably.

Usage

Before MW 1.40
php maintenance/runJobs.php
MW 1.40+
php maintenance/run.php runJobs

Advanced usage

Before MW 1.40
php maintenance/runJobs.php [--conf|--dbpass|--dbuser|--globals|--help|--maxjobs|--maxtime|--memory-limit|--nothrottle|--procs|--quiet|--server|--type|--wait|--wiki]
MW 1.40+
php maintenance/run.php runJobs [--conf|--dbpass|--dbuser|--globals|--help|--maxjobs|--maxtime|--memory-limit|--nothrottle|--procs|--quiet|--server|--type|--wait|--wiki]

General maintenance parameters

Option/Parameter    Description
no parameters       Will run all jobs currently in the queue
--help (-h)         Display this help message
--quiet (-q)        Whether to suppress non-error output
--conf              Location of "LocalSettings.php", if not default
--wiki              For specifying the wiki ID
--globals           Output globals at the end of processing for debugging
--memory-limit      Set a specific memory limit for the script, "max" for no limit or "default" to avoid changing it
--server            The protocol and server name to use in URLs, e.g. https://en.wikipedia.org. This is sometimes necessary because server name detection may fail in command line scripts.
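As an illustration of how the general parameters combine, a run for one wiki of a family might look like this (the wiki ID and configuration path are placeholders, not values from this page):

php maintenance/run.php runJobs --wiki examplewiki --conf /path/to/LocalSettings.php --memory-limit max --quiet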

Script dependent parameters

Option/Parameter    Description
--dbuser            The DB user to use for this script
--dbpass            The password to use for this script

Script specific parameters

Option/Parameter    Description
--maxjobs           Maximum number of jobs to run
--maxtime           Maximum amount of wall-clock time (in seconds)
--procs             Number of processes to use
--type              Type of job to run. See $wgJobClasses for available job types.
--wait              Wait for new jobs instead of exiting
--nothrottle        Ignore the job throttling configuration
--result            Set to "json" to print only a JSON response
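As a sketch of how the script-specific parameters combine, the following run would process only refreshLinks jobs, in two parallel processes, for at most five minutes (the numbers are arbitrary examples):

php maintenance/run.php runJobs --type refreshLinks --procs 2 --maxtime 300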

Use limits

It is not recommended to have "runJobs.php" run indefinitely without any limits. Using --maxjobs on its own is insufficient, so it is best paired with --maxtime and/or --memory-limit. Typical usage involves periodic runs with at least one of the restrictions set to prevent it from running too long in one go.
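A sketch of such a periodic setup as a cron job, assuming a MediaWiki 1.40+ installation under /var/www/mediawiki (adjust the path, schedule and limits to your environment):

# Every 15 minutes, run at most 1000 jobs and stop after 10 minutes of wall-clock time
*/15 * * * * php /var/www/mediawiki/maintenance/run.php runJobs --maxjobs 1000 --maxtime 600 >/dev/null 2>&1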

Example

Before MW 1.40
php maintenance/runJobs.php --maxjobs 5 --memory-limit 150M --type refreshLinks
MW 1.40+
php maintenance/run.php runJobs --maxjobs 5 --memory-limit 150M --type refreshLinks
Before MW 1.40
/home/flowerwiki/public_html/w/maintenance$ php runJobs.php --maxjobs 5 --memory-limit 150M --type refreshLinks
MW 1.40+
/home/flowerwiki/public_html/w/maintenance$ php run.php runJobs --maxjobs 5 --memory-limit 150M --type refreshLinks

When the script runs, you may see output like this, here showing jobs from the refreshLinks queue:

2010-10-29 13:50:38 refreshLinks Daisies STARTING
2010-10-29 13:50:38 refreshLinks Daisies t=501 good
2010-10-29 13:50:38 refreshLinks Magnolias STARTING
2010-10-29 13:50:38 refreshLinks Magnolias t=501 good
2010-10-29 13:50:39 refreshLinks Heirloom_Roses STARTING
2010-10-29 13:50:39 refreshLinks Heirloom_Roses t=500 good
2010-10-29 13:50:39 refreshLinks Carnations STARTING
2010-10-29 13:50:39 refreshLinks Carnations t=501 good
2010-10-29 13:50:40 refreshLinks Tulips STARTING
2010-10-29 13:50:40 refreshLinks Tulips t=563 good


Possible issues

The job queue appears to be stuck

Under certain circumstances, "runJobs.php" may hang indefinitely. Some jobs may fail to complete, clogging up the queue.

As noted above, it is best to prevent this from happening by providing the necessary flags, but if you do find yourself in this situation, you should ideally find the cause of the problem. Possible causes include:

  • A missing PHP extension in the php.ini of the PHP being run from the command line.
  • A buggy extension.

No standard tools or methods are currently available to let you diagnose the issue.
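The showJobs.php maintenance script will not tell you why a job hangs, but it can at least show what is sitting in the queue; a per-type breakdown may hint at which job type is stuck (check the script's --help for the options available in your version):

php maintenance/run.php showJobs --group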

Object caching

"runJobs.php" may hang if you have object caching enabled. If this happens, there is something you could try, with the caveat below in mind.

  1. Create another "LocalSettings.php" file with object caching disabled:
    $wgMainCacheType = CACHE_NONE;
    
  2. Then run runJobs.php, or run.php runJobs, with the --conf parameter to specify the location of the new LocalSettings.php file with caching disabled.
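For example, if the alternative settings file is saved as LocalSettings.nocache.php (the file name and path are just examples), the call would look like this:

php maintenance/run.php runJobs --conf /path/to/LocalSettings.nocache.php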

This approach is not recommended: some jobs are supposed to purge objects from the object cache, and those purges will not happen while caching is disabled. As a result, some updates may not be reflected on the wiki.

Terminate a running process

Sometimes, if you cannot find the problem and the job queue is creating overhead, you may have no choice but to terminate it, possibly at the expense of deleting jobs you might need. If this is the case and you accept the risk, you can try to clear the job that you think is causing trouble, or clear the entire job table.

Note that on some control panels that use cronjob automation, clearing jobs may have no visible effect. The process initiated may still appear to hang even if there are no jobs left to execute.

Using a database administration tool
  • Go to your database administration tool (e.g. phpMyAdmin) and locate the job table.
  • If you're lucky, it may just be an active job that is causing trouble and that needs to be cleared. You can locate it by finding the row that has a hash value in the job_token column.
  • Repeat if necessary. If all else fails, clear the entire job table (see the example queries below).
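If you prefer issuing SQL directly, the equivalent queries look roughly like this, assuming the default table name job without a $wgDBprefix (the job ID is a placeholder):

-- Find jobs that a runner has claimed (non-empty job_token)
SELECT job_id, job_cmd, job_token FROM job WHERE job_token != '';
-- Remove a single problematic job by its ID
DELETE FROM job WHERE job_id = 12345;
-- Or, as a last resort, empty the whole queue
DELETE FROM job;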
Using manageJobs.php

The maintenance script manageJobs.php does not give you insight into individual jobs, but it does let you delete jobs by type.
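A sketch of such a call, deleting all refreshLinks jobs (check the script's --help for the exact options supported by your MediaWiki version):

php maintenance/run.php manageJobs --type refreshLinks --action delete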

See also