Could not create directory mwstore://localS3/public


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


This is my configuration

$wgFileBackends[] = array(
    'name'        => 'myAWSfoldername',
    'class'       => 'AmazonS3FileBackend',
    'lockManager' => 'nullLockManager',
    'awsKey'      => '*****',
    'awsSecret'   => '*****',
    'awsRegion'   => 'us-east-1'
);



$wgLocalFileRepo = array (
    'class'             => 'LocalRepo',
    'name'              => 'local',
    'backend'           => 'localS3',
    'scriptDirUrl'      => $wgScriptPath,
    'scriptExtension'   => $wgScriptExtension,
    'url'               => $wgScriptPath . '/img_auth.php',
    'hashLevels'        => 0,
    'deletedHashLevels' => 0,
    'zones'             => array(
        'public'  => array( 'container' => 'public' ),
        'thumb'   => array( 'container' => 'thumb' ),
        'temp'    => array( 'container' => 'temp' ),
        'deleted' => array( 'container' => 'deleted' )
    )
);

I keep getting this error: "Could not create directory mwstore://localS3/public". This is what I did:

  1. I created LocalS3/public at the root and gave it full permissions. I think the extension will be super useful once it works.
  2. It seems like the mwstore path points somewhere that is not clear to me.
  3. Also, I never got the files to show up on the AWS side. Is there any specific setting for the AWS container to make it work?

I would truly appreciate your help. Laith (talk) 14:11, 1 June 2013 (UTC)Reply

Hi Laith,
I found what was causing the problem and will try to fix it as soon as I can.
Edit: Should probably mention that I fixed the problem. The extension should be working properly now. Thai (talk) 12:44, 14 July 2013 (UTC)Reply
I'm having the same kind of problem. My config is:
$wgAWSCredentials = array(
    'key' => '*****',
    'secret' => '*****'
);
$wgAWSRegion = 'us-west-2';
$wgFileBackends['s3']['containerPaths'] = array(
    'wiki_id-local-public' => 'public',
    'wiki_id-local-thumb' => 'thumb',
    'wiki_id-local-temp' => 'temp',
    'wiki_id-local-deleted' => 'deleted',
);
$wgLocalFileRepo = array (
    'class'             => 'LocalRepo',
    'name'              => 'local',
    'backend'           => 'AmazonS3',
    'scriptDirUrl'      => $wgScriptPath,
    'scriptExtension'   => $wgScriptExtension,
    'url'               => $wgScriptPath . '/img_auth.php',
    'zones'             => array(
        'public'  => array( 'container' => 'public' ),
        'thumb'   => array( 'container' => 'thumb' ),
        'temp'    => array( 'container' => 'temp' ),
        'deleted' => array( 'container' => 'deleted' )
    )
);
In other words, it is exactly like the recommended one on the extension page, except for the key, secret, region and containers. But when I try to upload a file, I get:
Could not create directory "mwstore://AmazonS3/public/d/da". LFS (talk) 12:30, 15 August 2013 (UTC)Reply
Hi Luis,
My configuration looks like the below...
$wgFileBackends['s3'] = array(
    'name'        => 'AmazonS3',
    'class'       => 'AmazonS3FileBackend',
    'lockManager' => 'nullLockManager',
    'awsKey'      => '*********',
    'awsSecret'   => '*********',
    'awsRegion'   => 'us-east-1',
    'containerPaths' => array(
        'foswiki-local-public'  =>'public',
        'foswiki-local-thumb'   => 'thumb',
        'foswiki-local-deleted' => 'deleted',
        'foswiki-local-temp'    => 'temp',
    )
);
$wgLocalFileRepo = array(
    'class'           => 'LocalRepo',
    'name'            => 'local',
    'backend'         => 'AmazonS3',
    'scriptDirUrl'    => $wgScriptPath,
    'scriptExtension' => $wgScriptExtension,
    'url'             => $wgScriptPath . '/img_auth.php',
    'zone'            => array(
        'public'  => array( 'container' => 'public' ),
        'thumb'   => array( 'container' => 'thumb' ),
        'temp'    => array( 'container' => 'temp' ),
        'deleted' => array( 'container' => 'deleted' )
    )
);
$wgImgAuthPublicTest = false;
Can you check if that works?
P.S. wiki_id is meant to be the name of your wiki. It won't break your site but I didn't mean for people to literally use it in their configuration. Thai (talk) 13:21, 15 August 2013 (UTC)Reply
Thanks for the quick reply! I tried your config, replacing the awsKey, awsSecret, awsRegion, containerPaths and the wiki_id. I got a similar error, but slightly different:
Could not create directory "mwstore://AmazonS3/local-public/c/c7".
It is slightly different because the "local-" wasn't there before. I don't know if that means anything to you. Thanks again for your help! LFS (talk) 13:45, 15 August 2013 (UTC)Reply
Hi Felipe / Luis,
I admit that I edited my configuration file prior to posting. I haven't had a look at my LocalSettings.php for a while (like a few weeks) so I forgot what some of the stuff did. The email you sent me regarding buckets reminded me of my folly.
You need to set the value of each element in the containerPaths array in $wgFileBackends to the S3 bucket you are using. It should look something like this...
$wgFileBackends['s3'] = array(
    'name'        => 'AmazonS3',
    'class'       => 'AmazonS3FileBackend',
    'lockManager' => 'nullLockManager',
    'awsKey'      => '*********',
    'awsSecret'   => '*********',
    'awsRegion'   => 'us-east-1',
    'containerPaths' => array(
        'my_wiki-local-public'  => 'bucket_you_are_using_for_public',
        'my_wiki-local-thumb'   => 'bucket_you_are_using_for_thumb',
        'my_wiki-local-deleted' => 'bucket_you_are_using_for_deleted',
        'my_wiki-local-temp'    => 'bucket_you_are_using_for_temp',
    )
);
 
$wgLocalFileRepo = array(
    'class'           => 'LocalRepo',
    'name'            => 'local',
    'backend'         => 'AmazonS3',
    'scriptDirUrl'    => $wgScriptPath,
    'scriptExtension' => $wgScriptExtension,
    'url'             => $wgScriptPath . '/img_auth.php',
    'zone'            => array(
        'public'  => array( 'container' => 'public' ),
        'thumb'   => array( 'container' => 'thumb' ),
        'temp'    => array( 'container' => 'temp' ),
        'deleted' => array( 'container' => 'deleted' )
    )
);
 
$wgImgAuthPublicTest = false;
Thai (talk) 14:20, 15 August 2013 (UTC)Reply
Could you possibly post a concrete example that doesn't need fill-in-the-blank replacements? MediaWiki is strange and undocumented territory for me, so I am thoroughly confused about what a container is, what zones are, the relationship between $wgFileBackends and $wgLocalFileRepo, and pretty much everything else.
Here is the bit of LocalSettings.php I'm using:
require_once( "$IP/extensions/AWS/AWS.php" );
$wgAWSCredentials = array(
    'key' => '${mw_aws_key}',
    'secret' => '${mw_aws_secret}'
);
$wgFileBackends['s3']['containerPaths'] = array(
    'local-public' => 'public.wikistatic.synhak.org',
    'local-thumb' => 'thumb.wikistatic.synhak.org',
    'local-deleted' => 'deleted.wikistatic.synhak.org',
    'local-temp' => 'temp.wikistatic.synhak.org'
);
 
// Make MediaWiki use Amazon S3 for file storage.
$wgLocalFileRepo = array (
    'class'             => 'LocalRepo',
    'name'              => 'local',
    'backend'           => 'AmazonS3',
    'scriptDirUrl'      => $wgScriptPath,
    'scriptExtension'   => $wgScriptExtension,
    'url'               => $wgScriptPath . '/img_auth.php',
    'zones'             => array(
        'public'  => array( 'container' => 'public' ),
        'thumb'   => array( 'container' => 'thumb' ),
        'temp'    => array( 'container' => 'temp' ),
        'deleted' => array( 'container' => 'deleted' )
    )   
);
$wgImgAuthPublicTest = false;
Whenever I try to upload a file, Special:Upload shows a message in red text: "Could not create directory "mwstore://AmazonS3/public/e/e6"."
I'm running MediaWiki 1.21.0 and the latest git master of the AWS extension. I've double and triple checked the aws credentials and am able to upload files via s3cmd. What am I missing here? 76.188.197.165 21:45, 18 August 2013 (UTC)Reply
I still can't get this to work. I'm following the instructions from the extension page and I'm getting:
Could not create directory "mwstore://s3/local-public/1/18".
I've tried all the variations in this thread too.
Also, does region matter? My S3 says:
Region: US Standard
This is what I have:
require_once("$IP/extensions/AWS/AWS.php");
// Configure AWS credentials
$wgAWSCredentials = array(
    'key' => 'xxxxx',
    'secret' => 'xxxxx'
);
$wgAWSRegion = 'us-east-1';
$wgFileBackends['s3']['containerPaths'] = array(
    'wiki_id-local-public' => 'public.domain.com',
    'wiki_id-local-thumb' => 'thumb.domain.com',
    'wiki_id-local-deleted' => 'deleted.domain.com',
    'wiki_id-local-temp' => 'temp.domain.com'
);
// Make MediaWiki use Amazon S3 for file storage.
$wgLocalFileRepo = array (
    'class'             => 'LocalRepo',
    'name'              => 'local',
    'backend'           => 's3',
    'scriptDirUrl'      => $wgScriptPath,
    'scriptExtension'   => $wgScriptExtension,
    'url'               => $wgScriptPath . '/img_auth.php',
    'zones'             => array(
        'public'  => array( 'url' => 'http://public.domain.com/' ),
        'thumb'   => array( 'url' => 'http://thumb.domain.com/' ),
        'temp'    => array( 'url' => 'http://temp.domain.com/' ),
        'deleted' => array( 'url' => 'http://deleted.domain.com/' )
    )
);
Luisdaniel12 (talk) 18:55, 11 October 2013 (UTC)Reply
Thank you, I've been travelling. I will try that soon and update you; I appreciate the response. Laith (talk) 22:11, 24 October 2013 (UTC)Reply
I'm having the same issue too. Error says, "Could not create directory "mwstore://AmazonS3/local-public/0/05"."
Like Luisdaniel12, I too have tried all the configuration variations mentioned in this thread. I also noticed that there are both 'zone' and 'zones'; I'm not sure which one is correct. But I tried both anyway, and neither worked.
If someone has successful experience, can you post your example please?
Much appreciated! 173.244.203.182 07:56, 15 February 2014 (UTC)Reply
Same issue as the others on this thread, on MediaWiki 1.22. Has there been a resolution? I can put files into local-public fine via s3cmd.
Also, as a heads-up, there is a typo in the current snippet:
        'temp'    => array( 'url' => 'http://some_s3_bucket_3.s3.amazonaws.com/' ),
        'deleted' => array( 'url' => 'http://some_s3_bucket_4.s3.amazonaws.com/' )
should be
        'deleted' => array( 'url' => 'http://some_s3_bucket_3.s3.amazonaws.com/' ),
        'temp'    => array( 'url' => 'http://some_s3_bucket_4.s3.amazonaws.com/' )
I also added
    'directory'         => $wgUploadDirectory,
to $wgLocalFileRepo due to a PHP notice in MW 1.22 (I don't have the exact notice any more, but it was an index error on includes/filebackend/FileBackendGroup.php line 83). Adobehill (talk) 22:44, 23 May 2014 (UTC)Reply
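For reference, a minimal sketch of where that key sits; the remaining repo keys are assumed to be the same as in the earlier examples:

$wgLocalFileRepo = array(
    'class'     => 'LocalRepo',
    'name'      => 'local',
    'backend'   => 'AmazonS3',
    // Per the note above: avoids the undefined-index notice from includes/filebackend/FileBackendGroup.php on MW 1.22.
    'directory' => $wgUploadDirectory,
    // ... 'scriptDirUrl', 'url', 'zones' etc. as in the examples above ...
);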
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

UploadStashFileException


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Hello, I get a "Fatal exception of type UploadStashFileException" when trying to upload a file.

Config is:

$wgAWSCredentials = array(
    'key' => '***',
    'secret' => '****'
);

$wgAWSRegion = 'eu-west-1';

$wgFileBackends['s3']['containerPaths'] = array(
    'wiki_id-local-public' => 'mysuperuniquename-public',
    'wiki_id-local-thumb' => 'mysuperuniquename-thumb',
    'wiki_id-local-deleted' => 'mysuperuniquename-deleted',
    'wiki_id-local-temp' => 'mysuperuniquename-temp'
);

// Make MediaWiki use Amazon S3 for file storage.
$wgLocalFileRepo = array (
    'class'             => 'LocalRepo',
    'name'              => 'local',
    'backend'           => 's3',
    'scriptDirUrl'      => $wgScriptPath,
    'scriptExtension'   => $wgScriptExtension,
    'url'               => $wgScriptPath . '/img_auth.php',
    'zones'             => array(
        'public'  => array( 'url' => 'http://mysuperuniquename-public.s3-eu-west-1.amazonaws.com/' ),
        'thumb'   => array( 'url' => 'http://mysuperuniquename-thumb.s3-eu-west-1.amazonaws.com/' ),
        'temp'    => array( 'url' => 'http://mysuperuniquename-temp.s3-eu-west-1.amazonaws.com/' ),
        'deleted' => array( 'url' => 'http://mysuperuniquename-deleted.s3-eu-west-1.amazonaws.com/' )
    )
);
Sasha bar (talk) 21:11, 21 October 2013 (UTC)Reply
Hi ,
I am facing the same problem. Have you fixed this issue? 122.164.83.119 06:12, 21 December 2013 (UTC)Reply
If you could provide more information about the error (like a full stack trace of the error) maybe it would be easier to know the reason. Read Manual:How to debug Ciencia Al Poder (talk) 10:37, 21 December 2013 (UTC)Reply
[aa6fbc3d] /index.php?title=Especial:Carregar_arquivo Exception from line 244 of /home/historia/public_html/includes/upload/UploadStash.php: Error storing file in '/tmp/phpKbA0jD': Could not create directory "mwstore://s3/local-temp/2/2b".
Backtrace:
#0 /home/historia/public_html/includes/upload/UploadBase.php(844): UploadStash->stashFile(string, string)
#1 /home/historia/public_html/includes/upload/UploadBase.php(857): UploadBase->stashFile()
#2 /home/historia/public_html/includes/upload/UploadBase.php(866): UploadBase->stashFileGetKey()
#3 /home/historia/public_html/includes/specials/SpecialUpload.php(340): UploadBase->stashSession()
#4 /home/historia/public_html/includes/specials/SpecialUpload.php(433): SpecialUpload->showUploadWarning(array)
#5 /home/historia/public_html/includes/specials/SpecialUpload.php(179): SpecialUpload->processUpload()
#6 /home/historia/public_html/includes/SpecialPage.php(631): SpecialUpload->execute(NULL)
#7 /home/historia/public_html/includes/SpecialPageFactory.php(488): SpecialPage->run(NULL)
#8 /home/historia/public_html/includes/Wiki.php(298): SpecialPageFactory::executePath(Title, RequestContext)
#9 /home/historia/public_html/includes/Wiki.php(602): MediaWiki->performRequest()
#10 /home/historia/public_html/includes/Wiki.php(467): MediaWiki->main()
#11 /home/historia/public_html/index.php(49): MediaWiki->run()
#12 {main} 41.190.168.90 17:58, 30 June 2014 (UTC)Reply
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

autoload.php is missing


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Hi, I downloaded the AWS extension from git, but there is no vendor folder. 122.164.27.94 07:29, 6 December 2013 (UTC)Reply

Please help me, there is no vendor folder. Rajaraman (talk) 07:32, 6 December 2013 (UTC)Reply
Did you run php composer.phar install? Thai (talk) 02:27, 7 December 2013 (UTC)Reply
How do I do that? 27.59.106.159 12:01, 20 December 2013 (UTC)Reply
Through SSH or a Terminal session otherwise. James Martindale (talk) 16:34, 24 October 2016 (UTC)Reply
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

Has anyone gotten this extension to work?


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


If so, do you have a working config? Thank you. Needcaffeine (talk) 02:52, 24 March 2014 (UTC)Reply

The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

Not working - Don't bother downloading


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


This extension does not work; don't bother downloading it and wasting your time. Tested with the current version of MediaWiki, 1.23.2, and an older version, 1.22.5, both with PHP 5.5.9 and MySQL 5.6.17-log on AWS instances. 2A02:810D:1140:394:285C:83B3:BC26:FD35 20:39, 2 August 2014 (UTC)Reply

The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

Internal error when using without configuring SQS


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


If I try using this extension without configuring the SQS job queue (i.e. I don't want to use it at all and I omit the config for it), then I get this exception:

[b279a293] /help/Contents MWException from line 116 of {testwiki}/extensions/AWS/sqs/JobQueueAmazonSqs.php: Amazon SQS error: Access to the resource https://sqs.us-west-2.amazonaws.com/ is denied.

Backtrace:

#0 {testwiki}/extensions/AWS/sqs/JobQueueAmazonSqs.php(301): JobQueueAmazonSqs->connect()
#1 {testwiki}/extensions/AWS/sqs/JobQueueAmazonSqs.php(202): JobQueueAmazonSqs->getAttributes()
#2 {testwiki}/extensions/AWS/sqs/JobQueueAmazonSqs.php(208): JobQueueAmazonSqs->doGetSize()
#3 {testwiki}/includes/jobqueue/JobQueue.php(184): JobQueueAmazonSqs->doIsEmpty()
#4 {testwiki}/includes/jobqueue/JobQueueGroup.php(283): JobQueue->isEmpty()
#5 {testwiki}/includes/jobqueue/JobQueueGroup.php(259): JobQueueGroup->getQueuesWithJobs()
#6 {testwiki}/includes/MediaWiki.php(635): JobQueueGroup->queuesHaveJobs(integer)
#7 {testwiki}/includes/MediaWiki.php(425): MediaWiki->triggerJobs()
#8 {testwiki}/index.php(41): MediaWiki->run()
#9 {main}
Daniel Friesen (Dantman) (talk) 06:22, 4 September 2015 (UTC)Reply
You need to assign appropriate permissions to the user role for SQS.
In the AWS console, go to "Identity and Access Management", then "Users", find the user whose AWS credentials you supplied, and click "Attach policy".
Find AmazonSQSFullAccess and add it.
The user can then create and read the required queues. Karlpietsch (talk) 04:00, 3 December 2015 (UTC)Reply
My friend,
If he doesn't "want to use it at all", we can assume he doesn't want to set any permissions on the user role, don't you think?
I just had the same issue, and I don't want to set any permissions on the user role, because I don't want to use the SQS job queue at all (does that make sense?).
So, for anyone who wants to remove the exception, you need to unset the $wgJobTypeConf['sqs'] variable after loading the extension:
//$wgJobTypeConf['default']['class'] = 'JobQueueAmazonSqs'; // uncomment the day you want to use the SQS job queue
unset( $wgJobTypeConf['sqs'] ); // comment this out the day you want to use the SQS job queue. 184.69.132.46 (talk) 23:30, 17 October 2016 (UTC)Reply
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

Exception & cannot create directory


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


On all pages of the wiki, when the AWS extension is enabled, I get this error at the very bottom of the page:

Exception encountered, of type "Aws\Common\Exception\InvalidArgumentException"

And then when attempting to upload I get this:

Could not create directory ‘mwstore://AmazonS3/local-public/2/26’.

My config is in this format:

require_once("$IP/extensions/AWS/AWS.php");

$wgFileBackends['s3'] = array(

    'name'        => 'AmazonS3',

    'class'       => 'AmazonS3FileBackend',

    'lockManager' => 'nullLockManager',

    'awsKey'      => '****',

    'awsSecret'   => '****',

    'awsRegion'   => 'eu-west-1',

    'containerPaths' => array(

        'wiki_id-local-public'  => 'https://publicbucketname.s3-website-eu-west-1.amazonaws.com/',

        'wiki_id-local-thumb'   => 'https://thumbbucketname.s3-website-eu-west-1.amazonaws.com/',

        'wiki_id-local-deleted' => 'https://deletedbucketname.s3-website-eu-west-1.amazonaws.com/',

        'wiki_id-local-temp'    => 'https://tempbucketname.s3-website-eu-west-1.amazonaws.com/',

    )

);

$wgLocalFileRepo = array(

    'class'           => 'LocalRepo',

    'name'            => 'local',

    'backend'         => 'AmazonS3',

    'scriptDirUrl'    => $wgScriptPath,

    'scriptExtension' => $wgScriptExtension,

    'url'             => $wgScriptPath . '/img_auth.php',

    'zone'            => array(

        'public'  => array( 'container' => 'public' ),

        'thumb'   => array( 'container' => 'thumb' ),

        'temp'    => array( 'container' => 'temp' ),

        'deleted' => array( 'container' => 'deleted' )

    )

); KraizeeM (talk) 15:26, 1 December 2015 (UTC)Reply

try this:
$wgAWSRegion = 'eu-west-1';
$wgFileBackends['s3']['containerPaths'] = array(
    'wiki_id-local-public' => 'publicbucketname',
    'wiki_id-local-thumb' => 'thumbbucketname',
    'wiki_id-local-deleted' => 'deletedbucketname',
    'wiki_id-local-temp' => 'tempbucketname'
);
// Make MediaWiki use Amazon S3 for file storage.
$wgLocalFileRepo = array (
    'class'             => 'LocalRepo',
    'name'              => 'local',
    'backend'           => 'AmazonS3',
    'scriptDirUrl'      => $wgScriptPath,
    'scriptExtension'   => $wgScriptExtension,
    'url'               => $wgScriptPath . '/img_auth.php',
    'zones'             => array(
        'public'  => array( 'url' => 'http://publicbucketname.s3-eu-west-1.amazonaws.com/' ),
        'thumb'   => array( 'url' => 'http://thumbbucketname.s3-eu-west-1.amazonaws.com/' ),
        'temp'    => array( 'url' => 'http://tempbucketname.s3-eu-west-1.amazonaws.com/' ),
        'deleted' => array( 'url' => 'http://deletedbucketname.s3-eu-west-1.amazonaws.com/' )
    )
);
Karlpietsch (talk) 04:09, 3 December 2015 (UTC)Reply
Where should the key/secret go with that? KraizeeM (talk) 09:13, 3 December 2015 (UTC)Reply
just add
$wgAWSCredentials = array(
    'key' => 'yourkey',
    'secret' => 'yoursecret'
); Karlpietsch (talk) 01:13, 4 December 2015 (UTC)Reply
If I change the config to that, I get this error:
Warning: Cannot modify header information - headers already sent by (output started at /var/www/html/wiki/includes/OutputPage.php:2322) in /var/www/html/wiki/includes/WebResponse.php on line 37 KraizeeM (talk) 08:26, 4 December 2015 (UTC)Reply
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

Invalid queue name for Semantic Jobs


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


If you have Semantic MediaWiki enabled, there is a queue named SMW\UpdateJob. The backslash is invalid in SQS queue names, so you will need to change the code to normalise the queue names and replace the backslash with something else.

MWException from line 116 of /var/www/html/w/extensions/AWS/sqs/JobQueueAmazonSqs.php: Amazon SQS error: Error: Can only include alphanumeric characters, hyphens, or underscores. 1 to 80 in length Karlpietsch (talk) 04:03, 3 December 2015 (UTC)Reply

The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

Is AWS still a valid extension?


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


The latest topics in this discussion are at least a year old.  Does that mean that there is a better solution than using this extension? Ahancie (talk) 21:03, 16 November 2016 (UTC)Reply

Do you only need the Amazon S3 part? (to store images in S3)
  1. I have a stable fork of S3-related functionality of Extension:AWS. It has been used in production for more than a year. If you want, I can send you the current state of the code.
  2. I did send patches to Extension:AWS, but some very important bugfixes which make it stable (e.g. https://gerrit.wikimedia.org/r/#/c/255534/ ) are sadly not yet merged into Extension:AWS.
  3. Unfortunately, the maintainer seems to be unresponsive, and I can't contact him.
I don't know what to do here. Edward Chernenko (talk) 00:55, 29 December 2016 (UTC)Reply
For the time being, please use https://github.com/edwardspec/mediawiki-aws-s3-stable-fork Edward Chernenko (talk) 23:58, 11 January 2017 (UTC)Reply
Thank you for sharing this with me. Ahancie (talk) 22:40, 21 February 2017 (UTC)Reply
@Edward Chernenko Do you perhaps want to publish the extension/fork on mediawiki.org and also host it in Gerrit? (That would bring the benefit that core developers can check it when removing deprecated code which your extension may use.) :) Florianschmidtwelzow (talk) 16:34, 5 September 2017 (UTC)Reply
Sure, why not.
Will create the page for it (and move it to Gerrit) in a week or so. Edward Chernenko (talk) 17:35, 5 September 2017 (UTC)Reply
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

Proposal to archive this extension


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Hello. I've proposed in https://phabricator.wikimedia.org/T174864 to archive the extension, given that nobody seems to be actively maintaining it. Best regards. —MarcoAurelio (talk) 11:20, 4 September 2017 (UTC)Reply

Might as well James Martindale (talk) 17:10, 4 September 2017 (UTC)Reply
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

JSON for IAM Policy update


I've been looking at this, and it seems the JSON for the IAM role isn't correct anymore. Maybe Amazon changed their policy grammar since the original entry. This is what I've got to:

{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<something>/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "arn:aws:s3:::<something>"
        }
    ]
}

HyverDev (talk) 07:35, 25 November 2018 (UTC)Reply

Nothing changed. The example in the article was always supposed to be inserted into the Statement array. This is not a "replace IAM inline policy with this" example, because IAM inline policy may already exist (and contain other rules that shouldn't be overwritten). Edward Chernenko (talk) 09:46, 25 November 2018 (UTC)Reply
I think the OP's sentiment is valid. Not sure why the documentation doesn't include this. It would make the setup less confusing to deal with for those who are new to S3/IAM. Jeffrey Wang 12:25, 21 April 2021 (UTC)Reply
Upon inspection, it did, but as a citation. Since this is very important, I've taken it out of a footnote. Jeffrey Wang 12:27, 21 April 2021 (UTC)Reply
Is there an example file that will work for someone who created a brand-new bucket just for this?
I'm asking about this due to hearing about increased security issues regarding AWS, and I want to keep things locked down while still enabling regular use of MediaWiki.
[Edited to make things more clear and reduce confusion] DiscordiaChaos (talk) 03:56, 9 August 2021 (UTC)Reply
@DiscordiaChaos The above JSON, in its exact form (apart from the ARN needing to be filled in), should be safe. Jeffrey Wang 07:23, 9 August 2021 (UTC)Reply

Can an existing bucket be updated with new images from an /images folder?


We have been using this extension with success in our non-production wiki, although production still uses the original /images folder on its server filesystem. Now that we are preparing to promote our non-prod system to production, we need a way to trigger this extension's "copy files from /images to s3 bucket" process with the new images that have been uploaded into our production system over the last few weeks. Ideally, it would be nice if it recognized images that already exist in the bucket and only copies new ones over there, but we would be fine with clearing the bucket and triggering another full load, similar to what happened the first time this extension was installed/ran.


I've replaced the old /images folder with a new one containing a few new images, but the AWS extension doesn't seem to be noticing them or uploading them to s3. Is this possible to trigger manually?


Thank you for maintaining this extension! We're using RDS now as well, and are very excited to have stateful data removed from our server to take advantage of better failover/autoscaling. 164.144.55.1 (talk) 17:50, 11 February 2019 (UTC)Reply

You should copy these files from the local directory to the S3 bucket manually, e.g. via the "aws s3 sync" command from the AWS CLI. When this extension is enabled, the local directory is completely ignored by MediaWiki, so images in the local directory won't be detected or automatically moved. Edward Chernenko (talk) 02:19, 12 February 2019 (UTC)Reply
Thanks! I can try that.
Quick follow up question:
Will this handle thumbnails properly? In the local /images dir, thumbnail file paths appear to be separated into numbered subfolders with more nested numbered subfolders, like this:
thumb/
    1/
        10/
            chart.jpg/
                121px-chart.jpg
                161px-chart.jpg
                180px-chart.jpg
                300px-chart.jpg
            cat.jpg/
                121px-cat.jpg
                161px-cat.jpg
                180px-cat.jpg
                300px-cat.jpg
Does the extension perhaps rework the thumbnail creation script to create these thumbnails from images after they end up in s3? (In which case I wouldn't need to worry about uploading thumbnails at all, and I would just aws s3 sync the original images and let the thumbnail creation process happen on its own). 164.144.55.1 (talk) 15:59, 12 February 2019 (UTC)Reply
Sorry about that formatting. Visual editor didn't maintain my indents in those code block lines. 164.144.55.1 (talk) 16:00, 12 February 2019 (UTC)Reply
You should set
$wgAWSRepoHashLevels = 2;
$wgAWSRepoDeletedHashLevels = 3;
to maintain the same naming (with 1/10/cat.jpg instead of just cat.jpg) as in the local directory. Edward Chernenko (talk) 01:09, 13 February 2019 (UTC)Reply

Amazon EFS drive and mount it to $wgUploadDirectory.


You mention: "Instead of using Amazon S3 (and this extension), you can create an Amazon EFS drive and mount it to $wgUploadDirectory. It's recommended for small wikis".

I have created an EFS drive and mounted it to my instance where the wikis is being run.

But how can I make this EFS drive the upload directory in LocalSettings.php? I have a DNS name for this EFS, but that's it.

Thanks Tristan Donxello (talk) 18:17, 26 April 2019 (UTC)Reply

Just mount it to /path/to/your/mediawiki/images, that's it. Edward Chernenko (talk) 15:18, 27 April 2019 (UTC)Reply
This time it worked; the other day it didn't.
We run several wikis and your extension is great, but will you maintain it, or what is the plan for the future?
Anyway, highly appreciate your work so far!
Best regards. Donxello (talk) 00:32, 28 April 2019 (UTC)Reply
1) If you use EFS, you don't need this extension. Mounting EFS works with MediaWiki out-of-the-box.
2) No particular plans (I don't use it on my own wikis). But feature-wise, this extension has 99% of what's needed, plus it's covered with automated tests and all. Edward Chernenko (talk) 11:44, 29 April 2019 (UTC)Reply

Has anyone gotten this to work with $wgUseSharedUploads?


I have a wiki-family setup and I installed this extension on my shared media repository (similar to commons.wikimedia.org); however, none of the other wikis (for example, en.wikipedia, fa.wikipedia) could generate thumbnails on demand anymore. Images and thumbnails that existed before the migration loaded from S3 without a problem. Thumbnails generate in S3 when the media repository wiki is the one making the request, just not when the other wikis in the family make the request. These are the shared upload settings I'm using:


$wgUseSharedUploads = true;
//$wgSharedUploadPath = 'https://example.com/images'; #old setting
$wgSharedUploadPath = 'https://examplebucket.s3.amazonaws.com'; #new setting
$wgHashedSharedUploadDirectory = true;
$wgSharedUploadDirectory = "images";
$wgFetchCommonsDescriptions = true;
$wgSharedUploadDBname = 'example_wikidbcommons';
$wgSharedUploadDBprefix = '';
$wgRepositoryBaseUrl = "https://example.com/File:";


This is the equivalent code using $wgForeignFileRepos:

$wgForeignFileRepos[] = [
        'class' => 'ForeignDBRepo',
        'name' => 'mywiki',
        //'url' => "https://example.com/images", #old
        'url' => "https://examplebucket.s3.amazonaws.com", #new
        'directory' => 'images',
        'hashLevels' => 2, // This must be the same for the other family member
        'dbType' => $wgDBtype,
        'dbServer' => $wgDBserver,
        'dbUser' => $wgDBuser,
        'dbPassword' => $wgDBpassword,
        'dbFlags' => DBO_DEFAULT,
        'dbName' => 'example_wikidbcommons',
        'tablePrefix' => '',
        'hasSharedCache' => false,
        'descBaseUrl' => 'https://example.com/File:',
        'fetchDescription' => true
];

I'm not sure what else I should be changing. The documentation for $wgSharedUploadPath says "Thumbnails will also be looked for and generated in this directory." T0lk (talk) 08:43, 6 November 2019 (UTC)Reply

I was able to resolve the problem. The solution is to modify the $wgForeignFileRepos I posted above as follows:
Change: 'name' => 'local',
Add: 'backend' => 'AmazonS3', T0lk (talk) 02:32, 7 November 2019 (UTC)Reply
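Putting those two changes together with the repo definition above, the working entry would look roughly like this (URLs and database names are the same placeholders as above):

$wgForeignFileRepos[] = [
        'class' => 'ForeignDBRepo',
        'name' => 'local',              // changed from 'mywiki' (per the fix above)
        'backend' => 'AmazonS3',        // added (per the fix above)
        'url' => "https://examplebucket.s3.amazonaws.com",
        'directory' => 'images',
        'hashLevels' => 2, // This must be the same for the other family member
        'dbType' => $wgDBtype,
        'dbServer' => $wgDBserver,
        'dbUser' => $wgDBuser,
        'dbPassword' => $wgDBpassword,
        'dbFlags' => DBO_DEFAULT,
        'dbName' => 'example_wikidbcommons',
        'tablePrefix' => '',
        'hasSharedCache' => false,
        'descBaseUrl' => 'https://example.com/File:',
        'fetchDescription' => true
];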

Can this extension also store other files except from media files to AWS?


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Hi, I would like to have MediaWiki store uploaded datasets, CSV files, and ZIP files (and in general any file) on AWS. Can I do this with this extension, or is it only for images (and videos)? Thanks MavropaliasG (talk) 04:54, 12 December 2019 (UTC)Reply

If you can upload it to your wiki, this extension will put that file on s3. Does that help? T0lk (talk) 06:29, 12 December 2019 (UTC)Reply
Thank you @T0lk, so this extension puts ALL uploads, regardless of their file type on s3? MavropaliasG (talk) 06:35, 12 December 2019 (UTC)Reply
Yes. This works for all uploads (from Special:Upload) Ciencia Al Poder (talk) 10:21, 12 December 2019 (UTC)Reply
Thank you for the reply @Ciencia Al Poder. Can I somehow also integrate it with uploads through VisualEditor? (i.e. when you edit a page with VisualEditor and press Insert > Media > Upload?) MavropaliasG (talk) 15:30, 12 December 2019 (UTC)Reply
AFAIK, it affects *all uploads* to the local wiki, no matter how they're uploaded (special:upload was an example), since this is the common repository for the wiki, and there's no way to choose between file repositories on upload Ciencia Al Poder (talk) 21:18, 12 December 2019 (UTC)Reply
Thank you for the information, much appreciated. MavropaliasG (talk) 04:09, 13 December 2019 (UTC)Reply
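One related core setting, not part of this extension: MediaWiki only accepts the file types listed in $wgFileExtensions, so datasets such as CSV or ZIP files have to be allowed there before any repository (S3 or local) will store them. A minimal sketch:

// Allow additional upload types; MediaWiki's own MIME and security checks still apply.
$wgFileExtensions = array_merge( $wgFileExtensions, [ 'csv', 'zip' ] );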
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

Broad IAM Permissions


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


In my S3 bucket permissions I am trying to use the 4 "public access blocks". But when I do this MediaWiki cannot access the bucket and therefore the extension doesn't work.


Now given I am using an IAM policy I understand the "public access blocks" should not be an issue. I saw on an AWS video that an overly permissive IAM policy is considered public by AWS (I couldn't find this video as I was writing this), so I think that is what is happening here. This is surely overly-permissive. Has anyone had this issue?


The recommended IAM policy is:

"Action": "s3:*",
"Resource": "arn:aws:s3:::<something>/*",

"Action": [ "s3:Get*", "s3:List*" ],
"Resource": [ "arn:aws:s3:::<something>" ]

Would you agree the following permissions should work too (or am I missing needed permissions)?

"Action": [ "s3:ListObjects", "s3:GetObject", "s3:PutObject" ],
"Resource": "arn:aws:s3:::<something>/*",

"Action": [ "s3:GetObject", "s3:ListObjects" ],
"Resource": [ "arn:aws:s3:::<something>" ]


Thank you 148.252.132.218 (talk) 11:46, 7 October 2020 (UTC)Reply

  1. It's not overly permissive. You are meant to use a separate S3 bucket for images, and if the S3 bucket contains only images, then there is no extra security added by only permitting certain operations. (I mean, with PutObject alone a malicious user can delete all files by overwriting them with zeroes, and you can't really restrict PutObject, while the very point of minimizing permissions is to reduce the possible impact of such a malicious user's actions.)
  2. Currently the extension also does CopyObject (when an image is moved and/or deleted), DeleteObject and the like. You can find all the API calls that it uses by searching for client-> in the code.
  3. There is no guarantee that additional API calls won't be used in future versions. (That is, if some future change requires the use of "GetObjectVersion", then its addition won't be considered as breaking anything, since the permission recommended in the README is Get* - but it will break setups where only GetObject is allowed.) Edward Chernenko (talk) 13:34, 7 October 2020 (UTC)Reply
Understood, and thank you Edward for the prompt response.
The problem I am having is that using those permissions means that I cannot add the extra security layer which is to enforce the "4 public access blocks". While I appreciate that the risk and potential impact is low, I tend to like to increase the restrictions as much as possible.
I will add those permissions used in client-> calls only for the moment. 148.252.132.218 (talk) 20:47, 7 October 2020 (UTC)Reply
Question: what exactly do you mean by "4 public access blocks" in this context? What is mentioned on https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html ? These IAM permissions specify permissions of an authorized AWS user (and/or IAM instance profile), they have nothing to do with public access. Edward Chernenko (talk) 21:36, 7 October 2020 (UTC)Reply
Yes, I was referring to those Public Access Policies in your link.
I used to have the Public Access Policies blocked for all S3 buckets both at an individual level and at a global level, as a security setting. Using this extension with the recommended setup makes me have to take off the global block and the block at an individual level for the bucket that is being used by the extension.
I couldn't find clear documentation to understand this issue specifically with regard to IAM. I did listen to a YouTube video where I recall someone from AWS saying something along the lines of "we don't like IAM roles that have *, so they might be considered public". And when I remove the public access policies block, the extension works correctly. Therefore it seems to me that there is something in my setup that is being 'considered public'.
Other IAM and Bucket Policies that I am using are not creating this issue on other buckets. For example, if I turn on the public access policies block on the bucket being used by MediaWiki, Cloudfront will still serve images from that particular S3 bucket. So Cloudfront, ironically, has no 'public permissions' despite effectively making the entire bucket publicly readable via serving all of its contents through a subdomain...
On the link you provided, there is a section that says "The meaning of 'public'", which talks about the issue of granting * related to some elements in the Bucket Policies.
It's really a pain, but to keep the desired security setting of globally denying all Public Access Policies for S3 buckets, I would like to find a solution.
I was planning on doing some testing with 'more restricted permissions' to see if that solves it. Maybe restricting the 'Actions' or maybe restricting access to my VPC only for instance? 85.255.233.161 (talk) 22:52, 8 October 2020 (UTC)Reply
The only reason why your S3 buckets are considered "public" is because any visitor of non-private wiki can see/download the images that you have uploaded into it. This is by design (it's supposed to be public). It has nothing to do with IAM permissions that you mentioned above.
If you have a private wiki, then Extension:AWS marks all uploaded S3 objects with "private" ACL, so they are not accessible regardless of what you write in IAM.
In short, you are trying to solve a nonexistent problem. Since it has nothing to do with this extension, I can not provide further support on this matter. Edward Chernenko (talk) 00:19, 9 October 2020 (UTC)Reply
It is not related to the images being accessible by any non-private wiki viewer. As I mention above:
"Cloudfront, ironically, has no 'public permissions' despite effectively making the entire bucket publicly readable via serving all of its contents through a subdomain..."
Why is the fact that any visitor can view/download the images not in conflict with the S3 buckets not being considered "public"? Because of the following configuration:
  1. set $wgAWSBucketDomain = 'img.mysite.com'; as indicated in the extension's README
  2. set CloudFront to serve the images from my bucket at 'img.mysite.com'
  3. set a Bucket Policy on the S3 bucket to allow CloudFront to serve the images, as follows:
{
   "Version": "2008-10-17",
   "Id": "PolicyForCloudFrontPrivateContent",
   "Statement": [
       {
           "Sid": "1",
           "Effect": "Allow",
           "Principal": {
               "AWS": "arn:aws:iam::cloudfront:user/XXXXXXXXXXXXX"
           },
           "Action": "s3:GetObject",
           "Resource": "arn:aws:s3:::XXXXXXXXXXXXXXX/*"
       }
   ]
}
Here the buckets are not considered public with regards to the "4 public access blocks" for S3 but Cloudfront is granted access through the bucket policy. That read access is very well isolated from any other S3 type access, e.g. Put. Additionally it adds the extra layer of security of having the 4 public blocks enabled. This is our desired security setting.
And if I use the extension with "4 public access blocks", viewing the images in the wiki is not a problem. Any non-private wiki viewer can view the images given they are being served by AWS Cloudfront.
The problem comes when the EC2 server tries to write to the bucket. Why? It seems to me because the IAM policy is "considered public" by S3, and therefore I get an error such as the following:
Warning: doCreateInternal: S3Exception: Error executing "PutObject" on "https://s3.amazonaws.com/XXXXXXXXXX/thumb/XXXXXXXXXXXXXXXXX.jpg/120px-XXXXXXXXXXXXXXXXX.jpg"; AWS HTTP error: Client error: `PUT https://s3.amazonaws.com/XXXXXXXXXXXXXXXXX/thumb/XXXXXXXXXXXXXXXXX.jpg/120px-XXXXXXXXXXXXXXXXX.jpg` resulted in a `403 Forbidden` response: AccessDeniedAccess DeniedXXXXXX (truncated...) AccessDenied (client): Access Denied - AccessDeniedAccess Denied XXXXXXXXXXXXXXXXX/XXXXXXXXXXXXXXXXX/XXXXXXXXXXXXXXXXX/seo= in /var/www/html/w/extensions/AWS/s3/AmazonS3FileBackend.php on line 1117 85.255.233.161 (talk) 11:43, 9 October 2020 (UTC)Reply
The IAM policy is not too broad.
The reason you are getting that error shouldn't be because the IAM policy is considered public by S3 but likely because the extension is trying to Put with ACL = public-read:
e.g. AmazonS3FileBackend.php:347: 'ACL' => $this->isSecure( $container ) ? 'private' : 'public-read',
Can you try the following in LocalSettings.php?
$wgFileBackends['s3']['privateWiki'] = true; Kris Ludwig (talk) 22:07, 9 October 2020 (UTC)Reply
Seems to work.
Thank you! 185.69.145.145 (talk) 09:23, 10 October 2020 (UTC)Reply
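For reference, a minimal sketch of where that flag goes (region and bucket are placeholders; only the last line is the setting in question):

wfLoadExtension( 'AWS' );
$wgAWSRegion = 'us-east-1';          // placeholder
$wgAWSBucketName = 'examplebucket';  // placeholder
// Upload S3 objects with a 'private' ACL instead of 'public-read'
// (see the AmazonS3FileBackend.php line quoted above), so the bucket's
// "block public access" settings no longer reject the PutObject calls.
$wgFileBackends['s3']['privateWiki'] = true;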
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

S3-compatibles also work?


Does this extension also allow use of API-compatible backends like MinIO, Linode Object Storage, etc.? 142.162.230.68 (talk) 00:34, 3 January 2021 (UTC)Reply

It does (see the README file for examples). Edward Chernenko (talk) 22:59, 4 January 2021 (UTC)Reply
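As a rough sketch of what such a setup usually looks like, assuming the 's3' backend forwards an 'endpoint' option to the underlying AWS SDK client (the exact key names and examples are in the README):

wfLoadExtension( 'AWS' );
$wgAWSCredentials = [
    'key'    => 'placeholder-access-key',
    'secret' => 'placeholder-secret-key',
    'token'  => false
];
$wgAWSRegion = 'us-east-1';         // many S3-compatible services accept any region string
$wgAWSBucketName = 'examplebucket';
// Assumption: this endpoint override is passed through to the S3 client.
$wgFileBackends['s3']['endpoint'] = 'https://minio.example.com';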

What is master here?


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


This seeks to clarify this diff, since I fail to understand your edit comment. Note that MediaWiki master is now at 1.37+. Does your change mean that the extension caters for MW 1.35, 1.36 and 1.37? Not sure if this is what is meant, but it would be cool if it is correct, I believe. [[kgh]] (talk) 16:39, 12 May 2021 (UTC)Reply

The extension itself uses "master" compatibility policy, meaning that it maintains backward compatibility with MediaWiki 1.35.
Its repository has REL1_34 branch that supports 1.27-1.34. Edward Chernenko (talk) 21:54, 12 May 2021 (UTC)Reply
Cool, thanks for clarifying. [[kgh]] (talk) 14:16, 13 May 2021 (UTC)Reply
The discussion above is closed. Please do not modify it. No further edits should be made to this discussion.

PDF support lacking


Doesn't seem to work very well with the PDF thumbnail pages generated by default by MediaWiki. This is probably just a function of the realities involved: generating thumbnails of a PDF stored on S3 on the fly doesn't make a lot of sense, and it should probably be processed as a batch job rather than done on the fly as configured by default. 47.36.146.194 (talk) 18:28, 23 July 2021 (UTC)Reply

Who should be using this extension?


What kind of wiki is this good for? 65.92.83.38 (talk) 04:46, 1 November 2021 (UTC)Reply

Wikis in such environments [[kgh]] (talk) 07:04, 1 November 2021 (UTC)Reply

Could not write file "mwstore://AmazonS3/local-public/...


(Also posted in GitHub issues for this repo)

I recently started getting these errors and I am struggling to figure out why.

Nothing has changed in my AWS configuration. The IAM configuration is still good and all of the bucket settings have not changed.

I am on PHP 7.4, MediaWiki 1.35, Extension:AWS 0.11.1. This hasn't really changed either.

I did recently update my composer dependencies. Per the MediaWiki documentation, I removed my composer.lock file and ran composer install.

Files are still being read from the bucket correctly.

Does anyone have troubleshooting suggestions or know what the issue is?

I verified that the AWS credentials I am using are still working correctly. I also tried using the latest code from the extension's repo.

To be clear, this was working just fine a few weeks ago and the only thing that has changed since then was that I updated the composer dependencies and I enabled the VisualEditor functionality.

Here is the error I am seeing in the debug logs (some information obfuscated):

<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>NM94VF (truncated...)
 AccessDenied (client): Access Denied - <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>NM94VF*******</RequestId><HostId>u6uU*************************************************</HostId></Error>
[error] [de39d4fe79d16409eda7a6cf] /wiki/Special:Upload   ErrorException from line 1104 of /var/www/html/extensions/AWS/s3/AmazonS3FileBackend.php: PHP Warning: doCreateInternal: S3Exception: Error executing "PutObject" on "******/Shopify_Photoshop_Actions.atn.zip"; AWS HTTP error: Client error: `PUT *******/Shopify_Photoshop_Actions.atn.zip` resulted in a `403 Forbidden` response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>NM94VF (truncated...)
 AccessDenied (client): Access Denied - <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>NM94VF*******</RequestId><HostId>u6uU*************************************************</HostId></Error>
#0 [internal function]: MWExceptionHandler::handleError()
#1 /var/www/html/extensions/AWS/s3/AmazonS3FileBackend.php(1104): trigger_error()
#2 /var/www/html/extensions/AWS/s3/AmazonS3FileBackend.php(1031): AmazonS3FileBackend->logException()
#3 /var/www/html/extensions/AWS/s3/AmazonS3FileBackend.php(347): AmazonS3FileBackend->runWithExceptionHandling()
#4 /var/www/html/extensions/AWS/s3/AmazonS3FileBackend.php(369): AmazonS3FileBackend->doCreateInternal()
#5 /var/www/html/includes/libs/filebackend/FileBackendStore.php(187): AmazonS3FileBackend->doStoreInternal()
#6 /var/www/html/includes/libs/filebackend/fileop/StoreFileOp.php(74): FileBackendStore->storeInternal()
#7 /var/www/html/includes/libs/filebackend/fileop/FileOp.php(301): StoreFileOp->doAttempt()
#8 /var/www/html/includes/libs/filebackend/FileOpBatch.php(176): FileOp->attempt()
#9 /var/www/html/includes/libs/filebackend/FileOpBatch.php(132): FileOpBatch::runParallelBatches()
#10 /var/www/html/includes/libs/filebackend/FileBackendStore.php(1308): FileOpBatch::attempt()
...

And here are the versions Composer is using for the extension's dependencies:

  - Locking aws/aws-sdk-php (3.209.17)
  - Locking composer/installers (v1.12.0)

Ajmichels (talk) 00:10, 4 February 2022 (UTC)Reply

The issue is that my wiki and bucket are private and I did not have $wgFileBackends['s3']['privateWiki'] = true; in my LocalSettings. I am still not sure how this was working before and then stopped, but... it is working now.
Thanks to Edward for helping me figure it out on GitHub. Ajmichels (talk) 22:28, 4 February 2022 (UTC)Reply

Could not write file "mwstore://AmazonS3/local-public/xx.jpg"


I'm getting the above error. Can anyone assist? Here is my LocalSettings.php (scrubbed for privacy):

$wgFileBackends['s3'];
wfLoadExtension( 'AWS' );

// Configure AWS credentials.
// THIS IS NOT NEEDED if your EC2 instance has an IAM instance profile.
$wgAWSCredentials = [
    'key' => 'xx',
    'secret' => 'xxx',
    'token' => false
];

$wgAWSRegion = 'us-east-1'; # Northern Virginia

// Replace <something> with the name of your S3 bucket, e.g. wonderfulbali234.
$wgAWSBucketName = "xxx";

and this is the policy we have:

"Statement": [
    {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::<bucketname>*"
    },
    {
        "Effect": "Allow",
        "Action": [
            "s3:Get*",
            "s3:List*"
        ],
        "Resource": "arn:aws:s3:::<bucketname>"
    }
] Waterlooglass (talk) 19:10, 21 August 2023 (UTC)Reply

I removed the credentials from our LocalSettings file and tried to just use our IAM role, and now I'm getting this error:
[746bd41fcda522fdafb85fb8] /wiki/Special:Upload Aws\Exception\CredentialsException: Error retrieving credentials from the instance profile metadata service. (cURL error 28: Connection timed out after 1001 milliseconds (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://169.254.169.254/latest/meta-data/iam/security-credentials/) Waterlooglass (talk) 20:01, 21 August 2023 (UTC)Reply
> Connection timed out after 1001 milliseconds
This looks like a firewall is blocking a connection, or some URL is set incorrectly Ciencia Al Poder (talk) 20:43, 21 August 2023 (UTC)Reply

Does the bucket need to be public?


Hello, I'm working with a private wiki and a private S3 bucket. To display the S3 images in the wiki, does the bucket need read access? Or can I use the extension while keeping everything private? Thanks! 161.0.161.26 (talk) 17:17, 3 January 2024 (UTC)Reply

It doesn't need read access. Private wikis serve images via /img_auth.php, not directly. Edward Chernenko (talk) 10:46, 5 January 2024 (UTC)Reply

Cloudflare R2


Has anyone done this using Cloudflare R2?

It's very similar to the other clouds, but it seems to be simpler. Song Ngư (talk) 12:00, 22 May 2024 (UTC)Reply

Write existing files to S3 using extension


Has anyone written existing files from the /images folder to S3? I set up the AWS extension recently, and as I upload an image it now gets written to S3 storage. But I have pre-existing images that I want to serve from S3 instead of the images folder.

Also, it seems that MediaWiki is storing images in the /images folder as well as in S3; I'm afraid the volume's storage limit will be reached since it is storing on both the volume and S3. How do I avoid storing on the volume and serve everything from S3? Pyhotshot (talk) 16:56, 9 August 2024 (UTC)Reply

To copy existing files to S3 you can use something like s3cmd or rclone. The extension doesn't have anything built in to do that transfer.
If it's writing to S3 correctly for new uploads, it shouldn't also be uploading those to the local images directory. Sam Wilson 05:44, 10 August 2024 (UTC)Reply
Currently it does upload to both: there is a copy of the images on the EBS volume, and they are uploaded to S3 in 3 different sizes (not sure if that is expected). What is the setting to use S3 only for the images? 205.174.22.25 (talk) 13:55, 12 August 2024 (UTC)Reply
Hello @Samwilson, you are right, it is not uploading to the local images directory. I had it successfully upload images to the S3 bucket. But I have a few other unresolved questions/issues.
  1. I was able to mv existing images from the images/ directory to S3, but how does MediaWiki know the new address (the S3 bucket) so that it serves those images from S3 instead of the local directory?
  2. S3 images are not being served at all. What config should I set in LocalSettings.php to let MediaWiki know that images should be served from S3, for both existing and new images? Pyhotshot (talk) 13:05, 14 August 2024 (UTC)Reply
@Pyhotshot: Oh, sorry, I assumed that when you said you'd got it working to upload to S3 that it was also serving correctly from there. So you mean it uploads to the right place, but isn't serving anything? What URL is being produced for the images (i.e. have a look at the HTML at the img src attribute)? That should give you a clue as to which bit is going wrong. What values do you have set for $wgAWSRegion, $wgAWSBucketName, $wgAWSBucketDomain, and $wgImgAuthPath? Sam Wilson 02:21, 15 August 2024 (UTC)Reply
Yes, the images are not being served.
Here are the settings. Apart from these, there are no other settings related to AWS S3:
wfLoadExtension( 'AWS' );
$wgAWSRegion = 'us-east-1'; # Northern Virginia
$wgAWSBucketName = "wonderfulbali234";
$wgArticlePath = "/wiki/$1";
$wgUsePathInfo = true; Pyhotshot (talk) 13:29, 15 August 2024 (UTC)Reply
https://<domainname>/wiki/File:Something1.jpg is how it shows in the inspected HTML of the wiki page, saying "Error creating thumbnail: File missing". The S3 path doesn't have a wiki/ folder in it: <bucketname>/7/78/<somename>.jpg. The bucket prefix is similar to what MediaWiki created when writing to the images/ folder (autogenerated by MediaWiki). Is it expecting a wiki/ folder to be in the bucket as well? Pyhotshot (talk) 13:37, 15 August 2024 (UTC)Reply
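One guess, based on the hash-levels discussion earlier on this page: the 7/78/<somename>.jpg layout described above is MediaWiki's hashed layout, which this extension only expects when hash levels are configured. A sketch under that assumption, reusing the settings already posted:

wfLoadExtension( 'AWS' );
$wgAWSRegion = 'us-east-1'; # Northern Virginia
$wgAWSBucketName = "wonderfulbali234";
// Assumption: the existing files were copied into the bucket with the hashed
// (7/78/...) layout, so the extension must be told to use the same hash levels
// as the old images/ directory did.
$wgAWSRepoHashLevels = 2;
$wgAWSRepoDeletedHashLevels = 3;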

Access denied: Writing thumbnail to s3


Hi, I am running into an S3 permission issue when MediaWiki tries to PUT a thumbnail to S3 using an HTTPS URL. The MediaWiki container role has full access to S3; when I upload an image, it gets uploaded to s3://<bucket>/name.jpg. But when it tries to read it back, it tries to create a thumbnail and PUT it into the S3 thumb/ dir, and fails to do so.

How do I let MediaWiki upload using a pre-signed S3 URL (the way it downloads), so it can upload to the thumb/ dir? From the logs below it is clearly trying to PUT to the HTTPS bucket path, but it is not a signed URL, and my S3 bucket only accepts signed HTTPS requests. Please help!

2024-08-19 19:36:26 mediawiki-7dc7d47f89-nbj86 mediawikidb: S3FileBackend: doPrepareInternal: S3 bucket wonderfulbali8567, dir=thumb/Husky1.jpg, params=dir

2024-08-19 19:36:26 mediawiki-7dc7d47f89-nbj86 mediawikidb: S3FileBackend: isSecure: checking the presence of thumb/.htsecure in S3 bucket wonderfulbali8567

2024-08-19 19:36:26 mediawiki-7dc7d47f89-nbj86 mediawikidb: S3FileBackend: doCreateInternal(): saving thumb/Husky1.jpg/120px-Husky1.jpg in S3 bucket wonderfulbali8567 (sha1 of the original file: cwyxvni7t03ivhv6worr9duqucn8pyr, Content-Type: image/jpeg)

2024-08-19 19:36:26 mediawiki-7dc7d47f89-nbj86 mediawikidb: S3FileBackend: exception AccessDenied in createOrStore from PutObject (false): Error executing "PutObject" on "https://wonderfulbali8567.s3.amazonaws.com/thumb/Husky1.jpg/120px-Husky1.jpg"; AWS HTTP error: Client error: `PUT https://wonderfulbali8567.s3.amazonaws.com/thumb/Husky1.jpg/120px-Husky1.jpg` resulted in a `403 Forbidden` response:

<?xml version="1.0" encoding="UTF-8"?>

<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>FPSPG8 (truncated...)

AccessDenied (client): Access Denied - <?xml version="1.0" encoding="UTF-8"?>

<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>FPSPG8T46ABA0RG4</RequestId><HostId>QGbDRrFC20ZqZwllbnCB/M96zukfrbEi/cdSQNG7DF+MEjjMfIHf5I5VI0i1uplA+p5jTPwVb0M=</HostId></Error> Pyhotshot (talk) 20:18, 19 August 2024 (UTC)Reply

File not found Although this PHP script (/img_auth.php) exists, the file requested for output (mwstore://AmazonS3/local-public/Pipe_header.jpg) does not.


I have followed all the steps and the extension is loaded, but I am getting a 500 error and I don't know why.

Maybe I have not configured it right?

I first created an inline policy to allow all the permissions for the bucket hello, then created an IAM role, attached the inline policy to it, and then attached that IAM role to the EC2 instance on which MediaWiki is running.

My S3 bucket is hello and it has a folder inside it called helloworld, and in that folder I have all the images, so my LocalSettings.php should be like this, right?

FILE STRUCTURE: hello/helloworld/... (all the images)

wfLoadExtension( 'AWS' );

$wgAWSBucketName = "hello";

$wgAWSBucketTopSubdirectory = "/helloworld";


Help, I am new to this. LuciferVN (talk) 19:05, 28 August 2024 (UTC)Reply

Return to "AWS" page.