In this paper, we propose a neural network for speech enhancement based on a plastic auditory model. The proposed system integrates a spectro-temporal analytical auditory model with a multi-layer fully connected network to form a quasi-CNN structure. The initial kernels of the convolutional layer are derived from the neuro-physiological auditory model. To simulate the plasticity of cortical neurons during attentional hearing, the kernels are allowed to adjust themselves according to the task at hand. For speech enhancement, the Fourier spectrogram rather than the auditory spectrogram is used as input to the proposed network, so that the enhanced speech signal can be faithfully reconstructed. The proposed system performs comparably to standard DNN and CNN systems when ample training resources are available; under limited-resource conditions, it outperforms the standard systems in all test settings.
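The "plastic kernel" idea described above can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration, not the paper's implementation: a Gabor-like 2-D filter stands in for the neuro-physiological auditory-model kernel, and a few gradient-descent steps on a toy quadratic loss stand in for task-driven adaptation; all function names and constants here are hypothetical.

```python
import numpy as np

# Gabor-like spectro-temporal filter: a stand-in for an
# auditory-model-derived initial convolution kernel (illustrative only).
def gabor_kernel(size=5, theta=0.0, freq=0.25):
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * (size / 3.0) ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * rot)

# Plain 'valid' cross-correlation, as a convolutional layer computes it.
def conv2d_valid(img, k):
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Kernel initialized from the (stand-in) auditory model ...
kernel = gabor_kernel()
initial = kernel.copy()

# ... then adapted by gradient descent on a toy task loss
# (matching a random target response on a random spectrogram patch),
# mimicking the "plasticity" of the kernels during training.
rng = np.random.default_rng(0)
spec = rng.standard_normal((16, 16))    # toy spectrogram patch
target = rng.standard_normal((12, 12))  # toy target response

loss_history = []
for _ in range(10):
    out = conv2d_valid(spec, kernel)
    resid = out - target                        # dL/d(out) for L = 0.5*||out - target||^2
    loss_history.append(0.5 * np.sum(resid ** 2))
    grad = conv2d_valid(spec, resid)            # dL/d(kernel), same shape as kernel
    kernel -= 1e-4 * grad                       # small step: the kernel "adapts"
loss_history.append(0.5 * np.sum((conv2d_valid(spec, kernel) - target) ** 2))
```

After a few steps the kernel has moved away from its auditory initialization while the task loss drops, which is the essence of the plasticity the abstract describes; in practice this adaptation would happen inside a deep-learning framework's optimizer rather than by hand.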