Mel-frequency cepstral coefficients (MFCC) are features commonly used in today's speech recognition systems. The recognition accuracy of systems using MFCC is known to be high in clean speech environments, but it degrades severely in noisy environments. In this paper, we propose new features, called auditory spectrum based features (ASBF), that are based on a cochlear model of the human auditory system. These new features can track the formants of speech, and their selection scheme is based on the second-order difference cochlear model and a model of primary auditory nerve processing. In our experiments, the performance of MFCC and ASBF is compared in clean and noisy environments. The results suggest that ASBF are much more robust to noise than MFCC.
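For reference, the standard MFCC pipeline that serves as the baseline here computes a power spectrum, applies a triangular mel filterbank, takes logarithms, and decorrelates with a DCT. The following is a minimal single-frame sketch of that pipeline, not the paper's implementation; the parameter values (16 kHz sampling, 512-point FFT, 26 filters, 13 coefficients) are illustrative assumptions.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: power spectrum -> mel filterbank -> log -> DCT."""
    # Pre-emphasize and window one frame, then take its power spectrum.
    frame = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    frame = frame * np.hamming(len(frame))
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft

    # Triangular filters spaced evenly on the mel scale
    # (mel = 2595 * log10(1 + f / 700)).
    mel_pts = np.linspace(0, 2595 * np.log10(1 + (sr / 2) / 700), n_mels + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(fbank @ power + 1e-10)

    # Type-II DCT decorrelates the log filterbank energies;
    # the first n_ceps coefficients are the MFCC.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return dct @ log_energy

# Usage: one 25 ms frame (400 samples at 16 kHz) of a synthetic 440 Hz tone.
frame = np.sin(2 * np.pi * 440 * np.arange(400) / 16000)
coeffs = mfcc(frame)
print(coeffs.shape)  # (13,)
```

The log and DCT steps are what make MFCC sensitive to broadband noise: noise energy spreads across all filterbank channels and therefore perturbs every cepstral coefficient, which is the weakness the proposed ASBF aim to address by tracking formant peaks instead.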