This work devises a maximum-margin sparse coding (MMSC) algorithm that jointly minimizes reconstruction loss and hinge loss in a single model. Combining a sparse representation with a maximum-margin constraint is analogous to the kernel trick and maximum-margin properties of the support vector machine (SVM), giving the proposed algorithm a basis for performing well in classification tasks. The key idea behind the proposed method is to use labeled and unlabeled data to learn discriminative representations and model parameters simultaneously, making data easier to classify in the new space. We propose block coordinate descent to learn all components of the model and give detailed derivations of the update rules for the model variables. Theoretical analysis of the convergence of the proposed MMSC algorithm is provided based on Zangwill's global convergence theorem. Additionally, most previous studies on dictionary learning suggest using an overcomplete dictionary to improve classification performance, but this is computationally expensive when the input dimension is large. We conduct experiments on several real data sets, including the Extended YaleB, AR face, and Caltech101 data sets. The experimental results indicate that the proposed algorithm outperforms the comparison algorithms without requiring an overcomplete dictionary, providing flexibility in dealing with high-dimensional data sets.
- Block coordinate descent
- Sparse coding
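The alternating scheme described in the abstract can be illustrated with a minimal sketch. The objective below (squared reconstruction error plus an l1 penalty on the codes plus a hinge loss on a linear classifier applied to the codes) and all function names, step sizes, and parameter values are illustrative assumptions, not the paper's exact formulation; the block coordinate descent cycles through the codes, the dictionary, and the classifier.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm (shrinks entries toward zero).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def mmsc_sketch(X, y, n_atoms=8, lam=0.1, gamma=1.0, step=1e-2,
                n_iter=30, seed=0):
    """Illustrative block coordinate descent (not the paper's exact updates) for
    min_{D,A,w}  ||X - D A||_F^2 + lam * ||A||_1
                 + gamma * sum_i max(0, 1 - y_i * w^T A_i),
    where X is (d, n), y holds +/-1 labels, A holds the sparse codes."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    A = 0.01 * rng.standard_normal((n_atoms, n))
    w = np.zeros(n_atoms)
    for _ in range(n_iter):
        # (1) Code update: proximal gradient step on A.
        grad_A = -2.0 * D.T @ (X - D @ A)
        active = y * (w @ A) < 1          # samples violating the margin
        grad_A[:, active] -= gamma * np.outer(w, y[active])
        A = soft_threshold(A - step * grad_A, step * lam)
        # (2) Dictionary update: gradient step, then renormalize each atom.
        D += step * 2.0 * (X - D @ A) @ A.T
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
        # (3) Classifier update: subgradient step on the hinge loss.
        active = y * (w @ A) < 1
        w += step * gamma * (A[:, active] * y[active]).sum(axis=1)
    return D, A, w
```

Each block update holds the other two variable groups fixed, which is what makes convergence arguments of the Zangwill type applicable to schemes of this form.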