Effective classification of imagined speech and intended speech would greatly aid the development of speech-based brain-computer interfaces (BCIs). This work distinguished imagined speech from intended speech using EEG signals recorded from the scalp. EEG signals were recorded from eleven subjects while they produced Mandarin Chinese monosyllables in imagined speech and intended speech, and the EEG features were classified with common spatial pattern (CSP), time-domain, frequency-domain, and Riemannian-manifold-based methods. The Riemannian-manifold-based method yielded the highest classification accuracy of the four, 85.9%. Moreover, the classification accuracy obtained with a left-hemisphere-only electrode configuration was close to that obtained with the whole-scalp electrode configuration. These findings have the potential to extend the output commands of silent speech interfaces.
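The Riemannian-manifold approach named above is commonly realized as a minimum-distance-to-mean (MDM) classifier over trial covariance matrices: each EEG trial is summarized by its spatial covariance, each class by the geometric (Karcher) mean of its training covariances, and a test trial is assigned to the class whose mean is nearest under the affine-invariant Riemannian metric. The abstract does not give the authors' exact implementation, so the following is a minimal pure-NumPy sketch of that standard pipeline; all function names are illustrative, and synthetic matrices stand in for real EEG trials.

```python
import numpy as np

def _powm(C, p):
    # Matrix power of a symmetric positive-definite (SPD) matrix via eigendecomposition.
    w, V = np.linalg.eigh(C)
    return (V * w ** p) @ V.T

def _logm(C):
    # Matrix logarithm of an SPD matrix.
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def _expm(S):
    # Matrix exponential of a symmetric matrix.
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def covariance(trial):
    # Spatial covariance of one trial, shaped (channels, samples).
    X = trial - trial.mean(axis=1, keepdims=True)
    return X @ X.T / (X.shape[1] - 1)

def riemann_dist(A, B):
    # Affine-invariant Riemannian distance between SPD matrices A and B.
    w = np.linalg.eigvalsh(_powm(A, -0.5) @ B @ _powm(A, -0.5))
    return np.sqrt(np.sum(np.log(w) ** 2))

def riemann_mean(covs, iters=20):
    # Karcher (geometric) mean of SPD matrices by fixed-point iteration.
    G = np.mean(covs, axis=0)
    for _ in range(iters):
        G_s, G_is = _powm(G, 0.5), _powm(G, -0.5)
        T = np.mean([_logm(G_is @ C @ G_is) for C in covs], axis=0)
        G = G_s @ _expm(T) @ G_s
    return G

def mdm_predict(trial, class_means):
    # Minimum distance to mean: assign the trial to the nearest class mean.
    C = covariance(trial)
    return int(np.argmin([riemann_dist(C, M) for M in class_means]))
```

In practice this computation is usually delegated to a library such as `pyriemann` (its `MDM` classifier implements the same idea); the sketch is only meant to make the geometry behind the reported 85.9% result concrete.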