
LSQ++: lower running time and higher recall in multi-codebook quantization

Abstract

Multi-codebook quantization (MCQ) is the task of expressing a set of vectors as accurately as possible in terms of discrete entries in multiple bases. Work in MCQ is heavily focused on lowering quantization error, thereby improving distance estimation and recall on benchmarks of visual descriptors at a fixed memory budget. However, recent studies and methods in this area are hard to compare against each other, because they use different datasets, different protocols, and, perhaps most importantly, different computational budgets. In this work, we first benchmark a series of MCQ baselines on an equal footing and provide an analysis of their recall-vs-running-time performance. We observe that local search quantization (LSQ) is in practice much faster than its competitors, but is not the most accurate method in all cases. We then introduce two novel improvements that render LSQ (i) more accurate and (ii) faster. These improvements are easy to implement, and define a new state of the art in MCQ.
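To make the MCQ objective concrete, here is a minimal NumPy sketch: a vector is approximated as the sum of one entry from each of several codebooks, and the quantization error is the squared distance between the vector and that sum. The random codebooks and the simple greedy encoder below are illustrative assumptions only; they are not the paper's LSQ algorithm, which uses local search to find much better codes.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, k = 8, 4, 16                      # vector dim, codebooks, entries per codebook
codebooks = rng.normal(size=(m, k, d))  # toy codebooks (learned from data in practice)
x = rng.normal(size=d)                  # vector to quantize

# Greedy sequential encoding: for each codebook in turn, pick the entry
# closest to the current residual. MCQ methods like LSQ refine such codes
# further, but the objective being minimized is the same.
residual = x.copy()
codes = []
for c in codebooks:
    idx = int(np.argmin(((residual - c) ** 2).sum(axis=1)))
    codes.append(idx)
    residual = residual - c[idx]

# Reconstruction: sum of the selected entry from each codebook.
x_hat = sum(c[i] for c, i in zip(codebooks, codes))
quantization_error = float(((x - x_hat) ** 2).sum())
```

At a fixed memory budget, each vector is stored as just the `m` code indices (here, 4 bytes per vector when `k` fits in one byte), so lowering this quantization error directly improves distance estimation and recall.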

Authors

Julieta Martinez, Shobhit Zakhmi, Holger H. Hoos, and James J. Little

Conference

ECCV 2018

Full Paper

‘LSQ++: lower running time and higher recall in multi-codebook quantization’ (PDF)

Uber ATG
