Convolutional neural networks (CNNs) are known to be vulnerable to adversarial attacks: well-crafted perturbations of the inputs can mislead a state-of-the-art CNN into making wrong decisions. There is therefore a pressing need for methods that can test for or detect this vulnerability. In this study, we propose an adversarial attack method, called Dual Iterative Fusion (DIF) with potential critical pixels, for testing CNNs and revealing their vulnerabilities. DIF modifies as few as 5 pixels in a 32x32 image and achieves faster, less noticeable, and more targeted attacks on a CNN. Testing CNNs with DIF, we observed that in many classical image-classification CNNs some classes are more vulnerable than others; that is, some classes are particularly susceptible to misclassification under adversarial attack. For example, in VGG19 trained on the CIFAR-10 dataset, the vulnerable class is 'Cat': its successfully-targeted attack rate, 57.01%, is markedly higher than that of the other classes, which stay below 25%. In ResNet18, the vulnerable class is 'Plane', with a successfully-targeted attack rate of 37.08%, while the rates of the other classes are below 12%. Such classes should be regarded as vulnerabilities of the CNNs, and they can be pinpointed by generating test images with DIF. These issues can be mitigated by retraining the CNNs on the adversarial images generated by DIF: after retraining, the misclassification rate of a vulnerable class drops from 61.67% to as low as 6.37% in the best case.