Manual talk:RunJobs.php
Interpreting the Output
What do these mean, please?
- start
- end
- t
t is the time in milliseconds --Planetenxin (talk) 13:43, 17 April 2012 (UTC)
Duplicate jobs?
If a refreshlinks2 job is run twice (in a row), naming the same template, is the second run redundant? If so, could the job table specify a unique key for each row? If not, why not? Thanks.
Getting "Timeout yy" when running runJobs.php
MW 1.17.2. I get
Timeout yy
when running the php ./runJobs.php maintenance script,
and I see that the job queue is not decreasing. Is there a place to increase the timeout?
I use Semantic MediaWiki, and many of the failing pages contain heavy queries that may take 10 seconds to load. Does this have an impact?
<?xml version="1.0"?>
<api>
  <query>
    <statistics pages="7697" articles="6648" views="24585" edits="63858" images="327" users="35" activeusers="-1" admins="4" jobs="25270" />
  </query>
</api>
Error
Every time I try to run this I get the following errors:
PHP Notice: Undefined index: HTTP_USER_AGENT in /var/www/*****.com/public/w/extensions/MobileDetect/MobileDetect.php on line 27
PHP Notice: Undefined index: HTTP_ACCEPT in /var/www/*****.com/public/w/extensions/MobileDetect/MobileDetect.php on line 28
Any idea what the issue is? --Zackmann08 (talk) 14:08, 17 August 2012 (UTC)
- HTTP_USER_AGENT and HTTP_ACCEPT sound like request variables, which are set when you access a webpage with a browser. However, maintenance scripts are run from the shell, where these variables naturally are not set. The extension should be fixed to either not use these variables when in CLI mode, or perhaps be deactivated in CLI mode completely. --88.130.121.146 13:50, 26 February 2014 (UTC)
LoadBalancer
I'm running runJobs.php via a cron job like this:
php /path/to/my/wiki/w/maintenance/runJobs.php --maxjobs 1000
I added the following to my LocalSettings.php:
$wgDebugLogFile = "/path/to/log/mediawiki/debug-{$wgDBname}.log";
...and I keep getting these entries in my log files:
Start command line script /path/to/my/wiki/w/maintenance/runJobs.php
[caches] main: MemcachedPhpBagOStuff, message: MemcachedPhpBagOStuff, parser: MemcachedPhpBagOStuff
[caches] LocalisationCache: using store LCStoreDB
Fully initialised
Connected to database 0 at 12.34.56.78
LoadBalancer::reuseConnection: this connection was not opened as a foreign connection
Does anyone know what this means?
Thanks and cheers,
runJobs runs nothing, yet there are items in the job queue
I checked the API to see how many jobs I had.
/api.php?action=query&meta=siteinfo&siprop=statistics&format=jsonfm
It says "jobs": 25.
I run php runJobs.php, and it processes with no actions.
I go to the MySQL database and look at the mw_job table. It has 25 records in it.
Am I safe to truncate mw_job, given that runJobs is not picking any of them up?
I have had to terminate runjobs.php quite a few times over the last week, due to caching. -Bob
- Yes, you can truncate it. Maybe they're skipped because of previous failed attempts. In that case, they may have the job_attempts field of the job table set to a number different from 0, and updating that field to 0 may execute them again. In any case, the behavior of that field, how to reset it, and how to force such jobs to run through this script is not documented in any way :( --Ciencia Al Poder (talk) 10:43, 6 March 2015 (UTC)
- Thank you Ciencia Al Poder. The job_attempts field was 1 for all of them. I ran
UPDATE mw_job SET job_attempts=0 WHERE job_attempts=1;
I re-ran the runJobs.php script; it ran with 0 jobs. I'm now just truncating the table. Bob.spencer (talk) 16:53, 10 March 2015 (UTC)
I have had some success in getting stuck jobs to run (again) by setting job_attempts back to 0 and also clearing the job_token field. This doesn't always work, though. As you say, the documentation is lacking. --Darenwelsh (talk) 14:55, 25 August 2016 (UTC)
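For illustration, a minimal sketch of that reset from the shell, assuming a MySQL/MariaDB backend, the mw_ table prefix used above, a database named my_wiki, and working mysql client credentials (all of these are placeholders; adjust to your setup):
# Make previously failed jobs eligible again: reset the attempt
# counter and clear the claim token, then process the queue.
mysql my_wiki -e "UPDATE mw_job SET job_attempts = 0, job_token = '' WHERE job_attempts > 0;"
php /path/to/mediawiki/maintenance/runJobs.php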
what does the output from --globals actually mean?
What does the output from --globals actually mean? Should it just be listing Manual:Global object variables? Mine seems to run forever, with many "RECURSION" lines. Is that normal? Tenbergen (talk) 04:13, 5 January 2017 (UTC)
- Those global objects often reference other globals, which may give those recursion issues. I think this is normal. --Ciencia Al Poder (talk) 10:38, 5 January 2017 (UTC)
runJobs does not exit as expected when both --maxtime and --wait are used
0,30 2-19 * * * php /wiki/maintenance/runJobs.php --memory-limit=1000M --procs 2 --maxtime 1770 -q --wait
What I expected is that runJobs stays alive for 1770 seconds and then kills itself, with runJobs being called every 30 minutes (30*60 = 1800 s).
But in fact, once --wait is enabled, these runJobs processes never stop themselves. The RAM quickly fills up, and then: boom.
If I run it without --wait, runJobs stops very often, way earlier than the 30-minute limit. If I shrink maxtime to 295 and start runJobs every 5 minutes (5*60 = 300 s), then lots of jobs are claimed but not actually run, due to timeouts.
Does anyone have a good strategy to get runJobs.php to work? Thank you in advance! --Zoglun (talk) 04:03, 11 April 2017 (UTC)
- You can try using a wrapper bash script that only starts a new instance of runJobs if no other instance of runJobs is running; then you can keep the --wait. --Ciencia Al Poder (talk) 09:04, 11 April 2017 (UTC)
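One minimal way to do that, if flock(1) from util-linux is available (a sketch; the lock path is a placeholder, the rest mirrors the crontab line above):
# crontab entry: flock -n exits at once if the lock is already held,
# so a new runJobs.php starts only when the previous one is gone.
0,30 2-19 * * * flock -n /tmp/runJobs-mywiki.lock php /wiki/maintenance/runJobs.php --memory-limit=1000M -q --wait
With --wait, the first instance keeps running indefinitely; the cron line then merely restarts it if it ever dies.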
- Very interesting point, which brings me to the following questions: What would such a bash script look like? Is it also possible to differentiate between jobs emitted by different wikis? On a machine I could imagine something like
ps aux | grep runJobs.php
to find processes, but from there ...? Perhaps
ps aux | grep /path/to/wiki/maintenance/runJobs.php
? --91.65.247.224 21:53, 26 May 2017 (UTC)
- Different wikis may need different bash scripts (or some more logic in the bash script to differentiate each instance). I was thinking of this (a sketch follows below):
- Check if a file (wikiname) exists.
- If it exists, check whether a process with the PID stored in the file exists and is an instance of the current script.
- If so, exit.
- If not, remove the file.
- Create the file, placing the current PID as its contents.
- Call runJobs.php.
- Remove the file.
- --Ciencia Al Poder (talk) 15:18, 27 May 2017 (UTC)
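A minimal shell sketch of those steps (the wiki name, lock path, and MediaWiki path are placeholders; adjust and test before relying on it):
#!/bin/bash
# Run runJobs.php for one wiki only if no previous run is still alive.
WIKI=mywiki                                   # placeholder wiki name
PIDFILE="/tmp/runJobs-$WIKI.pid"
RUNJOBS="/path/to/wiki/maintenance/runJobs.php"

if [ -f "$PIDFILE" ]; then
    OLDPID=$(cat "$PIDFILE")
    # Still alive and still an instance of this script? Then bail out.
    if [ -n "$OLDPID" ] && ps -p "$OLDPID" -o args= | grep -q "$0"; then
        exit 0
    fi
    rm -f "$PIDFILE"                          # stale lock file: remove it
fi

echo $$ > "$PIDFILE"                          # record our own PID
php "$RUNJOBS" --maxjobs 1000
rm -f "$PIDFILE"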
It turns out that the runJobs "early termination" and "claimed but not run" problems were both caused by a MySQL-related bug in Extension:CirrusSearch, which is fixed in the newest version. However, the --wait option still overrides the --maxtime option. What we eventually did is monitor the job process with Monit. This process supervision tool makes sure there is at least one runJobs.php running for each wiki. --Zoglun (talk) 19:58, 29 June 2017 (UTC)
No output when running
When I run this script, I get no output and am returned to the command prompt, yet there are some abandoned jobs in the queue. How do I fix this? MacFan4000 (talk) 20:14, 11 June 2017 (UTC)
- The script may just fatal out without any visible error. Enable error reporting and display_errors in php.ini, see Manual:How to debug. --Ciencia Al Poder (talk) 09:45, 12 June 2017 (UTC)
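For a one-off run, these settings can also be enabled on the command line instead of editing php.ini (a sketch; the path is a placeholder):
# -d sets a php.ini directive for this invocation only;
# error_reporting=-1 means "report everything".
php -d display_errors=1 -d error_reporting=-1 /path/to/mediawiki/maintenance/runJobs.php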
I get a PHP Warning upon running this script
- php maintenance/runJobs.php
- PHP Warning: array_merge_recursive(): Argument #1 is not an array in /var/www/html/includes/registration/ExtensionProcessor.php on line 294
Any ideas how I could resolve this? Is it necessary to run this before doing an upgrade? — Preceding unsigned comment added by HausaDictionary (talk • contribs)
- This warning seems to be caused by one of your extensions. However, the warning by itself shouldn't affect the run of maintenance/runJobs.php --Ciencia Al Poder (talk) 09:20, 22 May 2018 (UTC)
Run jobs in chunks
Running runJobs.php can cause memory leaks. If that happens, this can help:
#!/bin/sh
#
# Run jobs in chunks
maxjobs=1000
while [ "$(php /path/to/mediawiki/maintenance/showJobs.php)" -gt 0 ]; do
echo "another round of $maxjobs jobs..."
echo "in 5"
sleep 1
echo " 4"
sleep 1
echo " 3"
sleep 1
echo " 2"
sleep 1
echo " 1"
sleep 1
php /path/to/mediawiki/maintenance/runJobs.php --maxjobs="$maxjobs"
done
--Jamesmontalvo3 (talk) 15:00, 15 June 2018 (UTC)
"-gt
" stands for "greater than". I guess that "sleep 5
" will do the same (One will not get a nice count down though). --[[kgh]] (talk) 20:29, 19 June 2018 (UTC)
- I recommend the solution on Manual:Job queue#Simple service to run jobs. --Ciencia Al Poder (talk) 09:15, 20 June 2018 (UTC)
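For reference, the gist of that approach is a loop that your init system keeps alive; a rough standalone sketch (the path and batch size are placeholders, not the exact script from that page):
#!/bin/sh
# Process the queue in small batches; restarting the PHP process
# after each batch releases any memory it may have leaked.
while true; do
    php /path/to/mediawiki/maintenance/runJobs.php --maxjobs=100
    sleep 10
done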
Update this page
Actually, I think some information is missing or outdated:
- The parameter --procs doesn't work on native Windows systems, since there is no php71-pcntl there; it terminates with
Call to undefined function pcntl_signal()
Long story short, a note saying "doesn't work on native Windows PHP" would be pleasing. It's needless to spend time getting this to work if there's no way to get it running.
- The caveats information seems quite outdated. I tried it to speed up the job run, and it reduced the speed by 4-5 times. So is there any need for this information today?
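A rough way to compare throughput with and without a given tweak (here --procs, the flag discussed above) from a Unix shell; the path and numbers are placeholders, and since the two runs drain different jobs, treat the timings as approximate:
# Time a batch with and without forked worker processes.
time php /path/to/mediawiki/maintenance/runJobs.php --maxjobs 500
time php /path/to/mediawiki/maintenance/runJobs.php --maxjobs 500 --procs 4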