A new analytical model to eliminate redundant discrete cosine transform (DCT) and quantisation (Q) computations in block-based video encoders is proposed. The dynamic ranges of the quantised DCT coefficients are analysed, and a threshold scheme is then derived to determine whether the DCT and Q computations can be skipped without video quality degradation. In addition, fast DCT/inverse DCT (IDCT) algorithms are presented to implement the proposed analytical model. The proposed analytical model is compared with comparable analytical models reported in the literature. Both the theoretical analysis and the experimental results demonstrate that the proposed model greatly reduces the computational complexity of video encoding without any performance degradation and outperforms the other analytical models.
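The core idea of such skip schemes can be illustrated with a minimal sketch. The threshold form `beta * q_step` and the parameter names below are illustrative assumptions, not the bound actually derived in the paper: if the residual energy of a block is small relative to the quantisation step, every quantised DCT coefficient rounds to zero, so the DCT and Q stages can be bypassed.

```python
import numpy as np

def should_skip_dct_q(residual_block, q_step, beta=2.0):
    """Decide whether DCT and quantisation can be skipped for a block.

    Hypothetical threshold test: beta and the linear form beta * q_step
    are illustrative placeholders for a derived dynamic-range bound.
    """
    # Sum of absolute values (SAD) of the prediction residual.
    sad = np.abs(residual_block).sum()
    # If the residual is small enough, all quantised DCT coefficients
    # would be zero, so computing the DCT and Q would be redundant.
    return bool(sad < beta * q_step)

# Example: an all-zero residual block is always skippable.
skip = should_skip_dct_q(np.zeros((8, 8)), q_step=10)
```

In a real encoder the threshold would be derived analytically from the DCT basis and the quantiser's dead zone so that skipping never changes the reconstructed output.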
Number of pages: 7
Journal: IEE Proceedings: Vision, Image and Signal Processing
Publication status: Published - 2006