Well if you read the article and not just the summary, the authors argue that no fundamental changes seem to be coming anytime soon. Sure, newer codecs are coming out, but they all take the same approach. It's as if we were discussing public key cryptography and the algorithms used, and RSA were the only real technique, with the only new developments being larger keys, and other techniques like elliptic curves didn't exist.
I think your analogy is somewhat flawed. Public key cryptography was in much the same "rut" as video codecs. Video codecs have been stuck on hybrid block techniques, and public key cryptography has been stuck on modular arithmetic (RSA and elliptic curves both use modular arithmetic, although they depend on the difficulty of inverting different operations: factoring for RSA, the discrete logarithm for elliptic curves).
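To make the "easy forward, hard to invert" point concrete, here is a minimal sketch with toy parameters (the numbers are mine for illustration, not secure values):

    # Toy illustration of one-way modular arithmetic (NOT secure parameters).
    # The forward direction is cheap; inverting it is the hard problem.

    p = 2_147_483_647        # 2^31 - 1, a Mersenne prime; far too small for real use
    g = 7                    # base for the demo
    secret = 123_456_789     # the exponent we pretend is private

    # Easy: fast modular exponentiation computes g^secret mod p in milliseconds.
    public = pow(g, secret, p)

    # Hard: given (g, public, p), recovering `secret` is the discrete log problem.
    # For realistically sized p there is no known efficient classical algorithm,
    # which is exactly the kind of "inverting a modular operation" I mean above.

Both RSA and elliptic curve systems ultimately lean on a trapdoor of this flavor, just over different structures.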
There are of course other hard math problems that can be used in public key cryptography (lattices, knapsacks, error-correcting codes, hash-based schemes), and they languished for years until the threat of quantum computing cracking the incumbent technology revived interest in them.
Similarly, I predict hybrid block techniques will dominate video encoding until a disruption (in mathematical catastrophe theory parlance, a bifurcation) shows the potential to be 10x better (because 1.2x, i.e. 20% better, doesn't even pay for your lunch). It doesn't have to be 10x better out of the gate, but if it can't eventually be 10x better, why spend as much time optimizing it as has gone into hybrid block encoding? Nobody wants to develop something that doesn't have legs for a decade or more. The point isn't to find something different for the sake of difference; it's to find something that has legs (even if it isn't better today).
The problem with finding something with "legs" in video encoding is that we do not fully understand video. We don't really have much of a theoretical framework for measuring one lossy video compression scheme against another (except for "golden eyes", which depend on what side of the bed you woke up on). Crappy measures like PSNR and SSIM are still being used to estimate loss versus rate because we don't have anything better. One of the reasons people stick to hybrid block coding is that its artifacts are well known even if they can't be measured, so it is somewhat easier to be sure you are making forward progress. If the artifacts are totally different (as they likely would be for a different lossy coding scheme), comparison gets much harder because you can't objectively measure quality in order to optimize it (the conjoint analysis problem).
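For anyone who hasn't run into it, PSNR illustrates how crude these measures are: it's just a log-scaled mean squared error between frames. A minimal sketch (the frame shapes and dtype are my assumptions, nothing here comes from the article):

    import numpy as np

    def psnr(original: np.ndarray, decoded: np.ndarray, max_val: float = 255.0) -> float:
        """Peak signal-to-noise ratio between two same-shape frames, in dB.

        Assumes e.g. uint8 luma planes. Higher is "better", but note it
        treats every pixel error identically, regardless of whether a
        human viewer would ever notice it.
        """
        err = original.astype(np.float64) - decoded.astype(np.float64)
        mse = np.mean(err ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10((max_val ** 2) / mse)

A codec can game a number like this (e.g., by blurring) while looking worse to actual eyes, which is exactly why it's a poor yardstick for judging a radically different coding scheme.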
So until we have better theories about what makes a better video codec, people are using "art" to simulate science in this area, and as with most art, it's mostly subjective; it will be difficult to convince anyone of 10x potential in something that is only 80% as good today. If people *really* want to find something better, we need more research on the measurement problem and less on the artistic aspects. It's not that people haven't tried (e.g., VQEG), but very little has come from the efforts to date, and there has been little pressure to keep the ball moving forward.
In contrast, the mathematics of hard problems for public key cryptography is a very productive area of research, and the post-quantum cryptography goal has been driving people pretty hard.
Generally speaking, if you can measure something, you can improve it, and it's easier to measure incremental progress than big changes along a different dimension.