Zero-quantised discrete cosine transform prediction technique for video encoding

H. Wang*, S. Kwong, C. W. Kok, M. Y. Chan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)


A new analytical model is proposed to eliminate redundant discrete cosine transform (DCT) and quantisation (Q) computations in block-based video encoders. The dynamic ranges of the quantised DCT coefficients are analysed, and a threshold scheme is then derived to determine whether the DCT and Q computations for a block can be skipped without degrading video quality. In addition, fast DCT/inverse DCT (IDCT) algorithms are presented to implement the proposed analytical model. The proposed model is compared with comparable analytical models reported in the literature. Both the theoretical analysis and experimental results demonstrate that it greatly reduces the computational complexity of video encoding without any performance degradation and outperforms the other analytical models.
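The paper's exact threshold scheme is not reproduced in this abstract, but the general idea of zero-quantised DCT prediction can be illustrated with a standard sufficient condition from the literature: for an orthonormal 8×8 DCT, every coefficient magnitude is bounded by SAD/4 (each 1-D basis amplitude is at most 1/2), so if the residual block's sum of absolute differences (SAD) satisfies SAD < 2·Qstep under round-to-nearest quantisation, every quantised coefficient is zero and the DCT/Q stage can be skipped. The sketch below is an illustrative simplification under those assumptions, not the authors' model; the names `predict_all_zero`, `dct2`, and `quantise` are hypothetical.

```python
import numpy as np

N = 8  # block size assumed by this sketch


def dct_matrix(n=N):
    # Orthonormal DCT-II basis: C[u, x] = c(u) * cos((2x+1) * u * pi / (2n)),
    # with c(0) = sqrt(1/n) and c(u) = sqrt(2/n) for u > 0.
    C = np.zeros((n, n))
    for u in range(n):
        c = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
        for x in range(n):
            C[u, x] = c * np.cos((2 * x + 1) * u * np.pi / (2 * n))
    return C


_C = dct_matrix()


def dct2(block):
    # 2-D DCT as a separable pair of matrix products.
    return _C @ block @ _C.T


def quantise(coeffs, qstep):
    # Uniform round-to-nearest quantiser (no dead zone).
    return np.round(coeffs / qstep).astype(int)


def predict_all_zero(residual, qstep):
    # Sufficient (conservative) condition: |F(u,v)| <= SAD/4 for the
    # orthonormal DCT, and a coefficient quantises to zero when
    # |F| < qstep/2, so SAD < 2*qstep guarantees an all-zero block.
    sad = np.abs(residual).sum()
    return sad < 2.0 * qstep
```

Because the bound is conservative, the prediction never skips a block that would produce nonzero coefficients; it only misses some skippable blocks, which is why tighter per-coefficient thresholds (as in the paper) recover more savings.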

Original language: English
Pages (from-to): 677-683
Number of pages: 7
Journal: IEE Proceedings: Vision, Image and Signal Processing
Issue number: 5
Publication status: Published - 2006
