Talk:Requests for comment/Reducing image quality for mobile

Impact on infrastructure

Bear in mind that, depending on how you implement this, you may significantly increase imagescaler/Swift/upload CDN requests and Swift storage requirements. I'm not saying don't do it, but I'd like to see some risk assessment & load estimates in the RFC itself. I'd also very much prefer for this to be blocked on the implementation of the simplify thumbnail cache RFC, as it may alleviate half of my concerns (but not all). Faidon Liambotis (WMF) (talk) 01:23, 19 March 2014 (UTC)

Added some basic guesstimates and reasoning. Please provide some basic Swift stats (e.g. how many images have actually been scaled down, how many different versions of a single image we currently have on average, and what their size is vs. the originals) so we can produce better estimates. Having a Varnish-based thumbnail cache would obviously be a great improvement, but I think we should treat them as two independent projects, unless we know for sure that the estimated image-size increase cannot be handled by our current infrastructure. --Yurik (talk) 04:44, 19 March 2014 (UTC)
Your guesstimates are very wrong; your numbers are off by about 300-600x. We currently have 24,148,337 originals taking up 40.9 TB. These have 302,195,630 corresponding thumbnails, taking up 19.8 TB and growing. They consume space in the Swift backends with replica count 3 (i.e. 60 TB), as well as on the SSD disks of backend caches (eqiad, esams, ulsfo), in the page cache of Swift backends & Varnish backends, and in the memory of Varnish frontends (yes, it's wasteful; that's why we have been discussing it over at the simplification RFC). Finally, note that a lot of the scalability problems arise from the count of files that we keep, not from their aggregate size, a dimension that you haven't considered at all -- just imagine how different it is to handle e.g. 10 files of 2 GB each vs. 10 million files of 2 KB each. If this proposal is to double the number of thumbnails that we keep, I'm afraid it's going to need serious ops & platform work, with many months needed to make significant improvements to the architecture, and it would definitely need to be blocked on the simplified cache RFC. —The preceding unsigned comment was added by Faidon Liambotis (WMF) (talk • contribs).
Thanks for the numbers; let me try to go through them. 25 million originals turn into 300 million thumbnails, i.e. about 12 thumbnails per original. There are automatic gallery sizes (shown on category pages on Commons), and every file page offers these preset options: 800×600, 320×240, 640×480, 1024×768, 1280×960, 2048×1536 (assuming the image is larger than those). Any dumb crawler that simply follows all URLs will trigger image scaling. Also, when we generate HTML, the srcset attribute automatically adds 1.5x and 2x options, tripling the thumbnail count (this happens only for articles, not for galleries or file pages).
So if we assume that the 12 consists of ~5 preset options on the file/category pages plus 1 original, what remains is 2 article usages (×3 because of srcset). Mobile JavaScript would only replace those 2 usages (removing the srcset attribute and ignoring pages in the File namespace), so we end up with 50 million extra thumbnails at 5-10 KB each, i.e. 250-500 GB (twice my original estimate). In other words, we are looking at about half a terabyte of growth in disk space and ~15% growth in the number of files. Hope my calculations make sense and I did not miss anything major. --Yurik (talk) 16:00, 21 March 2014 (UTC)
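
A quick sanity check of the arithmetic above, spelled out as a small script (the 2-usages-per-original and 5-10 KB figures are the assumptions from this thread, not measured Swift stats):

 // Back-of-the-envelope check of the storage estimate above (assumed figures).
 const originals = 25e6;              // ~25 million original files
 const usagesPerOriginal = 2;         // assumed average article usages per original
 const extraThumbs = originals * usagesPerOriginal;   // 50 million new low-quality thumbs
 const perThumbKB = [5, 10];          // assumed size range of a reduced-quality thumbnail

 const extraGB = perThumbKB.map(kb => (extraThumbs * kb) / 1e6); // KB -> GB (decimal)
 console.log(`Extra thumbnails: ${extraThumbs.toLocaleString()}`);                 // 50,000,000
 console.log(`Extra storage: ${extraGB[0]}-${extraGB[1]} GB`);                     // 250-500 GB
 console.log(`File-count growth: ${(extraThumbs / 302195630 * 100).toFixed(0)}%`); // ~17%, in line with the ~15% quoted above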

Authors shouldn't have to worry about compression settings

The idea to have authors specify a quality setting, unless I'm fundamentally misunderstanding the intent, seems very misguided. We should write our software to take performance and efficiency considerations into account, not shift that responsibility to authors. If indeed images should be delivered at a higher compression factor / lower quality in certain use cases, let's identify those use cases, and go as far as we can without introducing markup to specify compression settings before even thinking about doing so. --Eloquence (talk) 02:26, 19 March 2014 (UTC)

This feature is not for authors but for internal/advanced use: when we serve an image to a mobile device on a mobile network, the goal is to automatically reduce image quality to reduce wait time. Additionally, this might also benefit users who pay for their data plan per MB, as it would allow us to create low/high quality mobile settings. --Yurik (talk) 04:44, 19 March 2014 (UTC)
@Yurik: If it's not for authors, I fail to see why it should be added to the markup (and your proposal contradicts that statement: "[t]his parameter might be used by various template authors"). --Eloquence (talk) 06:35, 19 March 2014 (UTC)
Sorry, I wasn't clear. This is MOSTLY for internal use, but there may be advanced authors out there who decide to reduce image quality in addition to reducing pixel size for some obscure template. After all, if we provide rotation and scaling, why limit the toolset? And since the generated image URL is exposed to the world, why not provide a well-documented parameter as well? In any case, I am OK with not including it if the community is against it or there is a technical reason not to. --Yurik (talk) 07:16, 19 March 2014 (UTC)
Image compression quality is a technical concern rather than an editorial one. It doesn't belong in the markup. --Eloquence (talk) 03:00, 20 March 2014 (UTC)
Yeah, I'm struggling to understand why we would want to reduce image quality (very strange to say aloud...) manually rather than programmatically. I agree with your posts here. --MZMcBride (talk) 03:20, 20 March 2014 (UTC)

OK, for now the core patch 119661 does not let users specify quality reduction via an image link parameter, only via modifying the URL itself. --Yurik (talk) 17:41, 24 March 2014 (UTC)

File insertion syntax

File insertion syntax is already an abomination. It really shouldn't be extended any further. --MZMcBride (talk) 02:04, 20 March 2014 (UTC)

An image with a different quality must have a different URL: the image is not scaled by the initial request during HTML rendering; rather, it is generated on 404 by extracting the needed parameters from the URL. The URL syntax is what seemed most straightforward to me. How do you suggest we pass that information to the image processor? --Yurik (talk) 17:13, 20 March 2014 (UTC)

+1 on MZ (and Erik in #Authors shouldn't have to worry about compression settings). --Nemo 14:45, 26 March 2014 (UTC)

@Nemo bis: are you proposing an alternative to changing the URL for the image? --Yurik (talk) 17:13, 26 March 2014 (UTC)
I have no opinion about that. MZ, Erik and I all commented on the image markup, i.e. what's written in wikitext. --Nemo 18:06, 26 March 2014 (UTC)
OK, so is @MZMcBride: OK with the URL change, or is there another option? This is NOT related to the above discussion about the image link parameter, which has been removed. --Yurik (talk) 22:25, 27 March 2014 (UTC)
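
For readers less familiar with the mechanism discussed here: the thumbnail file name itself carries the rendering parameters, and the scaler parses them when a request 404s. A minimal illustration of how a quality marker could ride along in such a name, assuming the "qlow-" prefix form that was later agreed in the 16 April discussion below (the real handler lives in MediaWiki's PHP thumbnail code; the function name and exact pattern here are only a sketch):

 // Illustrative parsing of a thumbnail name such as "qlow-100px-Example.jpg".
 function parseThumbName(name) {
   const m = /^(qlow-)?(\d+)px-(.+)$/.exec(name);
   if (!m) return null;
   return {
     lowQuality: Boolean(m[1]), // render with reduced JPEG quality
     width: Number(m[2]),       // requested width in pixels
     original: m[3],            // original file name
   };
 }

 console.log(parseThumbName('qlow-100px-Example.jpg'));
 // -> { lowQuality: true, width: 100, original: 'Example.jpg' }
 console.log(parseThumbName('100px-Example.jpg'));
 // -> { lowQuality: false, width: 100, original: 'Example.jpg' }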

Bikeshed

"Reducing image quality" feels like a strange name to me. Perhaps the name of this RFC could be "Compressing images for mobile" or something like that? Just a suggestion. :-) --MZMcBride (talk) 03:22, 20 March 2014 (UTC)Reply

Renamed. --Yurik (talk) 19:51, 20 March 2014 (UTC)Reply

Fundamentals

I'm unable to comment on this proposal, because it doesn't provide any background on how you reached it. All we're given as use case and background is one line: "Many mobile devices with low bandwidth or small screen/slow processor could benefit from showing JPEG images with reduced quality". All the main points seem to be taken for granted: they shouldn't.

  • Why JPEG? We have lots of PNG thumbnails; are we sure those are not taking more bandwidth?
  • Why for mobile? If we can degrade quality without noticeable problems, why not tweak the thumbnailing settings for everyone in core? Kiwix also compresses images a lot, for instance, and the quality is still rather good: worth exploring. (About 30 GB in addition to the 10 GB of text in the first and last full ZIM release.)
  • Why quality? Isn't it easier to change the default thumbnail size on the mobile site, from 220px to something else we already have thumbs for, like 120px? --Nemo 14:45, 26 March 2014 (UTC)
Thanks @Nemo bis:, I expanded the Rationale section; hopefully that answers your questions. --Yurik (talk) 17:12, 26 March 2014 (UTC)
Thank you. I summarise that with "because we can [easily]". That's not particularly convincing, I must say, even though it may still be a good idea. On the alleged lack of alternatives:
@Nemo bis: I am not against vips, but it seems its biggest benefit is execution efficiency - it runs much faster and consumes less memory. These are great qualities, but they are orthogonal to this RFC - I will be very happy if our scaler switches to a more efficient one, but that should be a separate issue. I ran stats against one day of sampled logs:
  • PNG: 2,199,455 requests / 18,749 MB / 8.7 KB average
  • JPEG: 1,708,366 requests / 28,548 MB / 17 KB average
So even though JPEGs are only ~43% of requests by count, they account for ~60% of total traffic and are nearly twice the size per file. Targeting JPEGs seems to give more bang for the buck, without introducing a new backend scaler.
Method: for each format, counted matching requests with zgrep -c '/commons/thumb/.*image/png' sampled-1000.tsv.log-20140325.gz and summed the transferred bytes with zgrep '/commons/thumb/.*image/png' sampled-1000.tsv.log-20140325.gz | cut -d$'\t' -f7 | awk '{s+=$1} END {printf "%.0f\n", s/1024/1024}' (same commands with image/jpeg for JPEG).
--Yurik (talk) 22:21, 27 March 2014 (UTC)
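
Deriving the quoted percentages from the one-day sample above (the request and megabyte figures are the ones reported in this comment):

 // Shares of thumbnail traffic by format, from the sampled-log figures above.
 const png  = { requests: 2199455, mb: 18749 };
 const jpeg = { requests: 1708366, mb: 28548 };

 const reqShare  = jpeg.requests / (jpeg.requests + png.requests); // ≈ 0.44
 const byteShare = jpeg.mb / (jpeg.mb + png.mb);                   // ≈ 0.60
 const avgKB = f => (f.mb * 1024) / f.requests;

 console.log(`JPEG share of requests: ${(reqShare * 100).toFixed(0)}%`); // ~44%
 console.log(`JPEG share of bytes: ${(byteShare * 100).toFixed(0)}%`);   // ~60%
 console.log(`Average size: JPEG ${avgKB(jpeg).toFixed(1)} KB vs PNG ${avgKB(png).toFixed(1)} KB`);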

16 April discussion

From yesterday's RfC chat:

21:03:31 <sumanah> #topic Reducing image quality for mobile
21:03:42 <TimStarling> the patch seems quite different to what yurik and I discussed at the architecture summit
21:03:50 <sumanah> ( but brion TimStarling - I may ask some follow-up questions at the end about a few other RfCs and pending things)
21:03:54 <sumanah> #link https://www.mediawiki.org/wiki/Requests_for_comment/Reducing_image_quality_for_mobile
21:04:15 <sumanah> #info I asked Yuri what he wanted: 1) an ok from ops to increase thumbnail storage by 2-3% and number of files by 15%, 2) from core/tim/etc to proceed with the proposed patch <yurik> assuming my proposed path is satisfactory to everyone's involved
21:04:19 <TimStarling> I thought that you should have only quality classes exposed, not expose an API allowing any integer percentage quality
21:04:59 <yurik> TimStarling, it would be fairly easy to change from a number to a string constant
21:05:11 <TimStarling> you suggest 30% but probably every mobile app will choose something different
21:05:12 <yurik> if this is a requirement of course
21:06:37 <yurik> TimStarling, this is similar to the problem we face with the thumbnail dimension - every wiki varying images by a few pixels. I propose a somewhat different solution here - an extension that does filtering/rounding of these numbers during the rendering
21:07:04 <sumanah> thedj: dfoy_ - http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-office/20140416.txt for the logs up till now
21:07:21 <TimStarling> I don't see any filtering or rounding in the patch
21:07:41 <yurik> example: user requested 240x250 image - the ext would say 250x250 already exists, or it is a multiple of 50, hence render it as a link to 250x250, with width=240
21:08:00 * aude waves
21:08:11 <aude> yurik: is this something your extension would do? rather than core?
21:08:12 <sumanah> Hi :)
21:08:12 <yurik> separate patch - as an extension - to address all such rounding requirements for both image size & quality
21:08:16 <TimStarling> yeah, you can read my thoughts on that on the relevant RFC
21:08:47 <yurik> aude, not ours, a new extension whose job is only to "standardize" on thumbnail generation
21:08:54 <AaronSchulz> gah
21:08:59 <aude> but not core?
21:09:11 <sumanah> AaronSchulz: I presume you think that's the wrong approach :)
21:09:28 * brion just added comment on the patch agreeing with idea to use quality classes rather than exposing full integer range
21:09:33 <sumanah> #link https://gerrit.wikimedia.org/r/#/c/119661/ Gerrit changeset, "Allow mobile to reduce image quality"
21:09:34 <yurik> no, i think core should be more flexible - depending on the site
21:09:46 * aude prefers we allow any size, but not keep cached so long if it's not requested
21:09:58 <aude> if that's feasible
21:10:09 <TimStarling> me too
21:11:37 <thedj> can i ask what the primary purpose is ?
21:11:46 <thedj> reduce time to load ?
21:12:01 <sumanah> thedj: honest question: does the RfC address that? do you think the RfC should be clearer about the problem being solved?
21:12:03 <yurik> reducing quality? to lower bandwidth consumption
21:12:36 <thedj> yurik: so download time and download cost ?
21:13:30 <yurik> both
21:13:36 <thedj> Do we have some metrics/ideas to give us indications of how much benefit that would translate into ?
21:13:43 <yurik> especially when the bandwidth is donated
21:14:22 <yurik> thedj, 30-40%
21:14:55 <thedj> ah k. so it's to a large degree from the zero perspective that we want to do this.
21:15:01 <yurik> correct
21:15:36 <brion> i could see it being handy for hi-dpi devices as well, we could serve the double-size images with a medium quality setting to trade-off bandwidth and visual quality
21:15:38 <sumanah> BTW, for those who haven't looked, we now have a few more comments on the changeset https://gerrit.wikimedia.org/r/119661 in the last few minutes
21:15:50 <brion> but definitely the incentive is where we’re pushing donated bandwidth :)
21:15:55 <sumanah> (there's our brion always looking out for responsive design & gadget stuff :) )
21:16:19 <bawolff> My comment was just that it shouldn't touch the -quality setting on pngs, and a nitpick on the commit message
21:17:11 <gwicke> once we move to HTML storage, is the idea to implement this as a DOM post-processing step?
21:17:39 <yurik> TimStarling, brion, please take a look at the https://www.mediawiki.org/wiki/Requests_for_comment/Reducing_image_quality_for_mobile#Possible_approaches
21:18:15 <yurik> it discusses the 3 paths to do this, with 1 path doing everything internally without exposing it via URL
21:18:43 <brion> *nod* i was assuming the first pass implementation once the quality setting was available...
21:18:52 <TimStarling> probably option 2
21:18:54 <brion> … was to do it as a dom postprocess step in mf+zero
21:19:09 <TimStarling> that's not on the list
21:19:24 <yurik> that's #3 i think
21:19:26 <brion> agh, i confused that with the js one
21:19:59 <yurik> tim, you think it is better to let varnish do automagical image url rewrite?
21:20:19 * AaronSchulz prefers js if possible
21:20:31 <TimStarling> how would it work with JS?
21:20:38 <yurik> because we won't have as much info in varnish, plus we would have to put too much biz-logic in varnish (ops won't like it)
21:20:43 <gwicke> one issue I see with Varnish is transparent downstream caches
21:20:52 <yurik> yes, that too
21:20:55 <TimStarling> a DOM ready event?
21:20:55 <gwicke> the third option (JS) avoids that
21:21:03 <yurik> JS would rewrite the URL
21:21:10 <brion> hmm
21:21:24 <brion> my main concern with that is rewriting urls in JS without often loading the original url is tricky
21:21:35 <TimStarling> I am wondering what the CPU requirements of option 3 are
21:21:37 <AaronSchulz> gwicke: related to downstream caches is handling purges
21:21:51 <TimStarling> and whether there will be flicker, browser incompatibilities, etc.
21:21:58 <gwicke> AaronSchulz, *nod*
21:22:02 <AaronSchulz> I guess if it's the very frontend cache it's fine
21:22:09 <TimStarling> we can't really waste the CPU of phones the same way we can desktop browsers
21:22:17 <gwicke> we'd have to send s-maxage-0
21:22:19 <yurik> workflow: zero ext changes src= to low quality, JS changes it back to highres if device/network is good
21:22:21 <gwicke> =0
21:22:34 <brion> :\
21:22:45 <yurik> how expensive is a JS image tag search?
21:22:55 <gwicke> it's pretty cheap I believe
21:23:07 <brion> replacing them may be slow if it’s a big page with lots of images though
21:23:19 <gwicke> one querySelectorAll call
21:23:24 <brion> and you’ve got the issue of loading the original images and then the new ones....
21:23:25 <TimStarling> image loading will start as soon as the img tag is created, right?
21:23:27 <yurik> percentage wise i still think it won't be much
21:23:35 <AaronSchulz> TimStarling: I think so :/
21:23:45 <gwicke> yeah, I think that's the bigger issue
21:23:57 <gwicke> we have a similar issue with the thumb size pref
21:23:58 <yurik> that's the big question - can the low->high quality img tag replacement be done before browser starts loading them?
21:24:07 <TimStarling> what about what brion said, why is that not an option?
21:24:19 <TimStarling> <brion> … was to do it as a dom postprocess step in mf+zero
21:24:22 <gwicke> if we can find a way to suppress the original thumb load before resizing / quality downgrading, then that would be awesome
21:24:46 <yurik> TimStarling, we would have to do it anyway, but there will be users who would want high-end images
21:25:03 <brion> i think we’re trying to avoid having php-time cacheable differences on zero….. it’s all very scary
21:25:35 <brion> in general, trying to scale for estimated network bandwidth is just a tricky tricky business
21:26:45 <sumanah> tfinc: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-office/20140416.txt for chat so far
21:27:18 <yurik> there is another question - i am pretty sure there are many mobile users out there who don't have zero and who might want low bandwidth too
21:27:24 <TimStarling> what about having a separate new service to do DOM rewriting?
21:27:49 <yurik> so we really should have a mobile setting "auto/always high/always low"
21:28:01 <TimStarling> yurik: those users can put up with what we give them
21:28:05 <gwicke> TimStarling, that's doable for low volume
21:28:26 <gwicke> which zero is afaik
21:28:57 <gwicke> what are the peak request rates on zero in pages / s ?
21:29:00 <yurik> well, not those who are still on 2G, or who is paying high price for their internet.
21:29:24 <TimStarling> it's out of scope
21:29:31 <aude> yurik: then i'd want no images, if concerned about bandwidth (imho)
21:29:40 <aude> maybe my mobile browser allows that
21:29:45 <sumanah> Those who have questions for Max, he's here now
21:29:54 <TimStarling> the problem is complicated enough when it is just Zero
21:30:09 <dr0ptp4kt> fwiw, MobileFrontend already has an Images on/off toggle
21:30:13 <sumanah> (OK, maybe today's meeting WON'T be a short one after all.)
21:31:17 <TimStarling> dr0ptp4kt: does it work?
21:31:26 <TimStarling> or do the images start loading and then get aborted?
21:31:34 <dr0ptp4kt> TimStarling: it is completely rewritten html
21:31:40 <dr0ptp4kt> it works
21:31:40 <MaxSem> it works via DOM rewriting on PHP side
21:31:53 <gwicke> there are ways to parse html without loading images, using https://developer.mozilla.org/en-US/docs/Web/API/DOMParser for example
21:32:27 <gwicke> or XMLHttpRequest
21:32:27 <dr0ptp4kt> one caveat is supporting devices that don't support javascript, or rather "advanced javascript" as determined by rl
21:32:35 <TimStarling> gwicke: well, that's the kind of thing that I would expect to use a lot of client-side CPU
21:32:48 <gwicke> not really- it's using the normal html parser
21:32:59 <gwicke> it does rely on JS support though
21:33:05 <gwicke> and a non-sucky browser
21:33:13 <dr0ptp4kt> HA!
21:33:20 <gwicke> ;)
21:33:26 <MaxSem> I don't think that many devices we want to support will work well with this
21:33:53 <gwicke> are you using XMLHttpRequest currently?
21:34:06 <MaxSem> libxml2
21:34:14 <MaxSem> be its name forever cursed
21:34:22 <TimStarling> can someone give me a quick overview of how HTML delivery in MF works and what the plans for it are?
21:34:42 <dr0ptp4kt> we use xhr opportunistically. so it's usually to upgrade the experience, like avoid server roundtrips for newer phones
21:34:58 <dr0ptp4kt> er, bigger roundtrips
21:35:15 <MaxSem> шеэы ыешдд мукн кщгпр щт увпуы [typed on a Cyrillic layout; on a Latin layout: "it's still very rough on edges"]
21:35:18 <gwicke> I see, so you are hesitant to require it
21:35:25 <TimStarling> preferably in a latin script
21:35:28 <yurik> MaxSem, +2
21:35:34 <MaxSem> it's still quite buggy so is used only in alpha
21:35:54 <dr0ptp4kt> sumana, would you please wire up a translation bot now? :)
21:36:02 <MaxSem> plans are to fix it
21:36:06 <MaxSem> ...eventually
21:36:11 <MaxSem> ...maybe
21:36:35 <gwicke> I don't see an issue with DOM post-processing on the server and storing that HTML back
21:36:36 <dr0ptp4kt> yeah, the xhr for w0 is more like getting runtime config to do things ahead of caches being purged (e.g., add zero-rated support for an additional language)
21:36:58 <sumanah> dr0ptp4kt: I think here it would just emit those cartoon profanity things, like $%#%@
21:37:17 <gwicke> as long as there are only a few variants and the transforms build on a known DOM spec that should work well
21:37:52 <yurik> gwicke, zero already does a DOM post-parse rewrite to replace all external URL links with special warning URLs
21:37:59 <MaxSem> I would reeeeeally love to avoid doing it in PHP again
21:38:21 <gwicke> it's fairly easy in JS
21:38:27 <gwicke> you can use jquery etc
21:38:37 <MaxSem> wouldn't be lethal for zero which already does HTML transformations, but still sucks
21:38:50 <yurik> gwicke, assuming flip phone has it :(
21:38:59 <gwicke> yurik, I mean on the server
21:39:38 <yurik> do we have a framework for node.js extensions?
21:39:57 <gwicke> yurik, we have HTTP..
21:40:08 <gwicke> set up a service, make requests to it
21:40:24 <sumanah> So we're about 2/3 through the hour and I'm not sure what to #info :)
21:40:46 <yurik> gwicke, you mean PHP becomes a proxy to another service on internal network?
21:41:07 <yurik> in any case, this is an optimization for the future, outside of the scope imho
21:41:19 <TimStarling> sumanah: three of us wrote comments on the gerrit change
21:41:36 <gwicke> yurik, you can go through PHP if you want; depends on whether it adds info that would be hard to get otherwise
21:42:04 <AaronSchulz> do we actually need the wikitext syntax addition too?
21:42:14 <MaxSem> definitely not
21:42:16 * AaronSchulz leans toward not adding it
21:42:17 <brion> i think we don’t need the wikitext addition no
21:42:28 <brion> keep it opaque to that layer
21:42:32 <AaronSchulz> right
21:42:33 <TimStarling> #info comments were provided on the image quality gerrit patch
21:42:33 <brion> it’s a presentation-layer decision
21:42:34 <sumanah> !link https://www.mediawiki.org/wiki/Talk:Requests_for_comment/Reducing_image_quality_for_mobile#File_insertion_syntax
21:42:50 <sumanah> er
21:42:53 <gwicke> -1 on the extra syntax
21:42:56 <sumanah> #link https://www.mediawiki.org/wiki/Talk:Requests_for_comment/Reducing_image_quality_for_mobile#File_insertion_syntax on wikitext addition
21:43:03 <sumanah> can I say #agreed ? :)
21:43:07 <bawolff> +1 on not adding extra options to file syntax
21:43:27 <yurik> bawolff, how do you mean?
21:43:27 <TimStarling> #info image scaler backend relatively uncontroversial -- HTML/URL manipulation to access that API is more complex
21:43:38 <yurik> we need to distinguish low-quality URLs from the highs
21:43:41 <bawolff> yurik: I'm agreeing with everyone
21:44:03 <yurik> good position :)
21:44:05 <bawolff> yurik: as in not adding [[File:foo.jpg|quality=20]] ui
21:44:15 <yurik> gotcha
21:44:42 <TimStarling> #info gwicke predictably favours Node.JS service
21:44:48 <AaronSchulz> lol
21:44:53 <gwicke> hehe
21:45:06 <gwicke> that's in response to MaxSem's lament about libxml2 ;)
21:45:34 <MaxSem> having a service for that would be even more crufty
21:45:39 <yurik> ok, i will change the URL syntax to image.jpg/100px-qlow-image.jpg this way we can later change it to some other magic keywords
21:45:45 <sumanah> So it's sounding like people think this is a relatively uncontroversial idea overall and we're just talking about implementation, right?
21:46:08 * tfinc reads the backscroll
21:46:20 <yurik> any objections to that URL format?
21:46:32 <bawolff> yurik: Maybe re-order those parameters. Easier to regex out qlow-100px from the actual name of the file
21:46:37 <sumanah> "this" being the RfC as a whole
21:46:53 <MaxSem> +1
21:46:54 <bawolff> since we're going to be presumably keeping 100px-image.jpg for the normal quality image
21:47:15 <gwicke> are we sure that we need a different URL?
21:47:21 <MaxSem> yes
21:47:29 <MaxSem> varnish rewrites are evil
21:48:16 <gwicke> do we already have info about zero ip ranges in varnish?
21:48:27 <yurik> ok, all settled, will implement the first step (core patch), and start implementing JS magic
21:48:43 <MaxSem> gwicke, for all that is holy, don't
21:48:44 <yurik> gwicke, yes, varnish detects zero based on ip
21:48:52 <sumanah> #info <yurik> ok, all settled, will implement the first step (core patch), and start implementing JS magic
21:49:15 <MaxSem> especially since now only mobile varnishes know about zero
21:49:19 <gwicke> hmm, then it might not actually be that hard to use that for image request rewriting
21:50:31 <yurik> #info required modifications: use string instead of integer "qlow-100px-image.jpg", make it JPG only (no png)
21:50:31 <gwicke> I'd be against adding that info if it wasn't there already; but since it's already there it seems that the extra complexity would be fairly limited
21:50:46 <TimStarling> varnish doesn't have a lot of string handling built in, but you can use inline C, I did it once...
21:51:17 <MaxSem> regexping it would actually be possible
21:51:34 <MaxSem> but still this would SUCK
21:52:00 <sumanah> I have a few min of "what's up next week + other RfC news you should be aware of" to say before the end of the hour.
21:52:04 <sumanah> Any closing statements?
21:52:39 <TimStarling> modules/varnish/templates/vcl/wikimedia.vcl.erb was my own little bit of varnish URL manipulation
21:53:30 <yurik> if we are done, would love to get +2 for https://gerrit.wikimedia.org/r/#/c/109853/
21:54:34 <TimStarling> #info Tim skeptical about client-side JS rewrite: potential for CPU usage, flicker, image load aborts, browser incompatibilities, etc.
21:55:21 <gwicke> avoiding a double-load is hard afaik
21:55:41 <TimStarling> which is an argument for doing it on the server side
21:55:49 <AaronSchulz> yeah it may not be possible to use JS
21:55:50 <gwicke> or in Varnish
21:56:04 <AaronSchulz> so it's 1-2
21:56:18 <TimStarling> we have so many powerful tools on the server side now, we shouldn't be so keen to offload processing
21:57:00 <gwicke> for normal desktop page views the thumb size pref is pretty much the only one that can't be easily handled in CSS
21:57:53 <gwicke> so if we can find a way to do this in Varnish it might be possible to implement those prefs purely in CSS
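
To make the client-side option concrete: the workflow described above is that the Zero/MobileFrontend side emits low-quality ("qlow-") thumbnail URLs and JavaScript upgrades them when the device and network allow. A minimal sketch of that upgrade step under those assumptions (the function names and the connection check are illustrative; navigator.connection is a modern API that was not available on the devices discussed here, and, as noted above, the browser has likely already started fetching the low-quality image, so a double load is hard to avoid):

 // Sketch of the "JS upgrades qlow thumbnails" step (option 3); names illustrative.
 // Assumes the server emitted thumbnail URLs like .../qlow-220px-Example.jpg.
 function upgradeThumbnails(useHighQuality) {
   if (!useHighQuality) return;
   document.querySelectorAll('img[src*="/qlow-"]').forEach(function (img) {
     // Strip the quality marker to get the normal-quality thumbnail URL.
     // Caveat from the discussion above: the qlow version may already be loading,
     // so this triggers a second request (the "double load" problem).
     img.src = img.src.replace('/qlow-', '/');
   });
 }

 // Example policy: upgrade unless the connection looks slow or data saving is on.
 var conn = navigator.connection;
 upgradeThumbnails(!conn || (conn.effectiveType !== '2g' && !conn.saveData));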

Comment on Javascript scaling

When working on a mobile site with several others, we tried the JS dynamic-size option. The primary issue we encountered was flicker, or browser incompatibility/bugginess on older devices, mostly Android 2.2 devices using carrier- or manufacturer-installed browsers. They didn't handle it well, and occasionally there were several stages of flicker, although we did not determine exactly why. We were just a few people, and after we saw the issues we decided to use a low-quality image. Perhaps a better implementation could be created here (I'm sure something better could be, but how much better I'm not sure), but fair warning. NativeForeigner (talk) 23:37, 17 April 2014 (UTC)

IRC discussion 11 June

From Architecture meetings/RFC review 2014-06-11:

Please see Architecture meetings/RFC review 2014-06-11 for the full log. Sharihareswara (WMF) (talk) 19:54, 12 June 2014 (UTC)
