We discuss the problem of ranking instances into $k$ ordered classes using a
``large margin'' principle. We introduce two main approaches: the first is the
``fixed margin'' policy, in which the margin between the closest neighboring
classes is maximized --- which turns out to be a direct generalization of SVM
to ranking learning. The second approach allows for $k-1$ different margins,
whose sum is maximized; it is shown to reduce to $\nu$-SVM when the number of
classes is $k=2$. Both approaches give rise to an optimization problem of
optimal size $2l$, where $l$ is the total number of training examples.
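For concreteness, the fixed-margin policy admits a soft-margin primal of roughly the
following form (a sketch in generic SVM notation, not part of the original abstract:
$x_{i}^{j}$ denotes the $i$-th training example of rank $j$, $b_{1},\dots,b_{k-1}$ are
rank thresholds, $C$ a trade-off constant, and $\xi,\xi^{*}$ slack variables):
\[
\begin{aligned}
\min_{w,\;b_{1},\dots,b_{k-1},\;\xi,\xi^{*}\ge 0}\quad
 & \tfrac{1}{2}\|w\|^{2} \;+\; C\sum_{j=1}^{k-1}\sum_{i}\bigl(\xi_{i}^{j}+\xi_{i}^{*\,j+1}\bigr) \\
\text{s.t.}\quad
 & w\cdot x_{i}^{j} \;\le\; b_{j}-1+\xi_{i}^{j} \quad\text{for every } x_{i}^{j} \text{ of rank } j, \\
 & w\cdot x_{i}^{j+1} \;\ge\; b_{j}+1-\xi_{i}^{*\,j+1} \quad\text{for every } x_{i}^{j+1} \text{ of rank } j+1,
\end{aligned}
\]
so that all $k-1$ boundaries $w\cdot x = b_{j}$ share the single margin $2/\|w\|$ of the
closest pair of neighboring classes; the sum-of-margins policy instead assigns each of
the $k-1$ boundaries its own margin and maximizes their sum.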
Experiments performed on visual
classification and ``collaborative filtering'' show that both approaches
outperform existing ordinal regression algorithms applied to ranking, as well
as multi-class SVM applied to general multi-class classification.