
Leaking logs since 2011

Pffthahaha.

Undot and an extension

Some time ago I wrote a VapourSynth plugin for Undot’s functionality, with an extra “norow” option, inspired by degrainmedian. This is of course obsoleted by the recently-released GenericFilters package, from which you can build identical functionality using Maximum, Minimum and Limiter.

The interesting part isn’t that, however; it’s the “norow” option. The way I implemented it was to clamp the current pixel to only six pixels of its 3×3 neighbourhood (the rows above and below), rather than all eight. I’ve had this idea since I got to encoding AKB0048 (which I can’t even call dropped because it never really started), and it actually grew from another idea.
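Here’s a sketch of the idea in VapourSynth Python terms. It’s not the plugin’s actual code: the function name is mine, the neighbour clips are built with a point-resize shift trick rather than anything from GenericFilters, and it’s written with non-subsampled clips in mind.

```python
import vapoursynth as vs
core = vs.core

def undot6(clip):
    """Clamp each pixel to the min/max of the six pixels in the rows directly
    above and below it -- the "norow" variant that skips the two same-row neighbours."""
    def shift(dx, dy):
        # Line the neighbour at offset (dx, dy) up with the centre pixel;
        # point resizing with src_left/src_top repeats edge pixels at the borders.
        return core.resize.Point(clip, src_left=dx, src_top=dy)

    neigh = [shift(-1, -1), shift(0, -1), shift(1, -1),   # row above
             shift(-1,  1), shift(0,  1), shift(1,  1)]   # row below

    # x is the centre pixel, y..d are the six neighbours:
    # clamp x between their minimum and maximum.
    expr = "x y z max a max b max c max d max min y z min a min b min c min d min max"
    return core.std.Expr([clip] + neigh, expr)
```

Adding shift(-1, 0) and shift(1, 0) to the neighbour list (plus two more variables in the expression) gives you the plain eight-point Undot back.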

Usually, when dealing with Japanese television, there’ll be a bunch of frames that are filled with artifacts. Well, more artifacts than the typical blocking and banding. Have a sample. These primarily occur around scene changes; the noise follows the contours of the previous scene’s edges fairly closely.

In this particular frame, I could’ve frozen it to the next frame, but I’m in a group with an encoder who doesn’t believe in :effort:ing TV encodes. (I reiterate what I’ve said on IRC plenty of times: I do not disagree. Spending lots of time making 24 minutes of some usually lousily drawn cartoon look better is very wasteful and inefficient. The way RHE does things gets things done.) We need some way to automate this. I don’t believe in freezing frames to non-identical frames (the prevalent practice, and there’s even a script that does it around scene changes), since that could make pans jerky, for example.

The astute will notice that the artifacts are mostly confined to every other line. In other words, it’s a byproduct of interlacing/telecine and lossy compression. If you try to hit it with vinverse, you’ll just smooth the artifacts out a bit. Have a pic of vinverse applied to the same frame. It’s not really better or worse if you scale it down to 720p, since that’d blend the artifacts too. Is there any recourse other than to simply give up and hope viewers won’t notice? Yes, yes there is.

We can use nnedi3. Assuming the artifacts are only in the bottom field, we can use nnedi3(field=1) to get rid of them, and it works pretty well. But what if you guess the field wrong? Your artifacts spill into the once-clean field, doubling the overall artifacting rather than removing it. You dun goofed. To make things more interesting, the location of the artifacts depends on the phase of the moon, so you can’t just cross your fingers and hope field=0/1 works every time. Well, you actually can, since for a specific show one field will be the better choice most of the time. But we want perfection.
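In script terms it’s a one-liner; the helper name is mine, and the only subtlety is the field-numbering convention, which I’m stating from memory below (treat it as an assumption and check it against your own source):

```python
import vapoursynth as vs
core = vs.core

def rebuild_field(clip, field=1):
    # dh stays at its default (False), so frame count and height are unchanged:
    # one field is kept and the other is re-interpolated from it. I'm assuming
    # field=1 keeps the top field (i.e. the junk bottom field gets rebuilt);
    # guess wrong and the junk gets smeared over the whole frame instead.
    return core.nnedi3.nnedi3(clip, field=field)
```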

Based on a suggestion by tp7, I whipped up a script to detect which of the two fields has more noise/detail, then discard that field (since artifacts are “details” and we want to get rid of them). Detail detection was done simply by comparing a field to a median-filtered version of itself, then taking the average difference over the whole field; the differences for the two fields are then compared and thresholded. If the amounts of “detail” are roughly the same, the frame is left alone, but if one field has significantly more “detail”, it gets discarded and interpolated with nnedi3. This was a huge step up from the past. (I no longer have this script; there might be a version on Pastebin.) In fact, I even made a mask to detect horizontal lines, which had vinverse applied instead. Still not perfect by any means, though; this strategy completely discards one field and is hardly safe to use on general frames.
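Reconstructed from memory, the core of it would look something like this in VapourSynth terms — not the actual script (the function name, the 1.5× threshold and the nnedi3 field numbering are all my guesses), and it only looks at luma:

```python
import vapoursynth as vs
core = vs.core

def defield(src, thr=1.5, tff=True):
    fields = core.std.SeparateFields(src, tff=tff)
    top = core.std.SelectEvery(fields, cycle=2, offsets=0)   # first field of each frame
    bot = core.std.SelectEvery(fields, cycle=2, offsets=1)   # second field

    # PlaneStatsDiff = normalised mean absolute difference between a field
    # and a median-filtered copy of itself, i.e. how much "detail" it has.
    top_stats = core.std.PlaneStats(top, core.std.Median(top))
    bot_stats = core.std.PlaneStats(bot, core.std.Median(bot))

    # Candidate repairs: keep one field, rebuild the other with nnedi3
    # (assuming field=1 keeps the top field and field=0 keeps the bottom).
    keep_top = core.nnedi3.nnedi3(src, field=1)
    keep_bot = core.nnedi3.nnedi3(src, field=0)

    def pick(n, f):
        t = f[0].props['PlaneStatsDiff']
        b = f[1].props['PlaneStatsDiff']
        if b > t * thr:   # bottom field is much busier: throw it away
            return keep_top
        if t > b * thr:   # top field is much busier: throw it away
            return keep_bot
        return src        # similar amounts of detail: leave the frame alone

    return core.std.FrameEval(src, eval=pick, prop_src=[top_stats, bot_stats])
```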

Then I came up with the idea of using a 6-point Undot (which I named undot6, creatively enough). Junk fields are usually noisy, not constant-colour, and we can exploit this fact to “safely” remove them. By clamping each pixel to the pixels above and below it, clean pixels are likely to remain clean, while junk pixels become clean! Neither RemoveGrain(3) nor RemoveGrain(4) is quite as good at removing such artifacts as undot6, and RG(3/4) blur/deform lines more than undot6 does. Getting somewhere, aren’t we? That said, I set this aside back when I was working on AKB0048 because, well, that got dropped before anything was released at all. At the time I wasn’t looking for a one-size-fits-all solution anyway, since I had YATTA’s postprocessing flagging at my disposal.
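If you want to eyeball that comparison yourself, an interleaved A/B is the easy way (this reuses the undot6 sketch from above, and the helper name is mine):

```python
import vapoursynth as vs
core = vs.core

def compare(src):
    # Step through the result frame by frame in your previewer of choice.
    clips = [
        core.text.Text(src, "source"),
        core.text.Text(undot6(src), "undot6"),
        core.text.Text(core.rgvs.RemoveGrain(src, mode=3), "RG3"),
        core.text.Text(core.rgvs.RemoveGrain(src, mode=4), "RG4"),
    ]
    return core.std.Interleave(clips)
```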

At some point Xythar asked me to look at Jormungand’s transport stream, so he could encode S2 slightly less shittily than S1 was done. (This was for gg’s releases.) I can’t remember where this fits into the timeline of this story, or whether it’s involved at all. Damn my unreliable memory. I checked IRC logs, which indicate that this happened on 5 September, while the first semi-private release of that filter was on 12 September.

Time passed, then K happened. I don’t know why I cared about K in particular; the initial lightly-filtered premux came to 700 MB (lol), then I suggested using Deblock_QED. That, along with raising the CRF a bit, reduced the file size to 600 MB or so. I had a look at the transport stream, then refined undot6 by adding a bunch of bells and whistles. Imagine a frame where the fields are different solid colours. Using undot6 would only cause the colours to switch position (ignoring the behaviour at the edges). Applying undot6 again would just switch the colours back (again, ignoring edge behaviour). Combing is a common issue on transport streams too, because MPEG-2 is terrible at dealing with hard-telecined (or, in general, interlaced) material. One way is to simply ignore this, because you’re supposed to downscale to 720p, which would blend the lines anyway. But if you care a bit more, it can also be handled by averaging the result of one undot6 call with the result of two undot6 calls, with the added benefit of nuking more junk.
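In script form that last trick is just a Merge of one pass against two (again reusing the undot6 sketch from earlier; the wrapper name is mine):

```python
import vapoursynth as vs
core = vs.core

def undot6_merged(src):
    # One pass nukes dots but swaps genuinely combed rows; a second pass swaps
    # them back. Averaging the two keeps combing roughly where it started while
    # hitting the junk twice.
    once = undot6(src)
    twice = undot6(once)
    return core.std.Merge(once, twice)   # default weight = 0.5, a plain average
```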

Lots of hurfdurfing later, I released the results of this experiment to some other encoders. I don’t know why people like to keep their avs scripts top secret or whatnot, because making them public would certainly let other people improve on your work… Yeah, I guess people are too egotistical to let that happen.