Abstract
With the development of technologies such as the Internet and mobile communication, the volume of news produced grows daily. Reliable news delivery can support a thriving economy and spread knowledge, but fake news can disrupt the existing order and instill incorrect values and even beliefs. Detecting the authenticity of news is therefore an extremely important problem. Many researchers have applied artificial intelligence (AI) to fake news detection and achieved excellent results. However, once humans become dependent on AI, adversarial examples (AEs) can deceive the model and cause people to receive false information. We have discovered that samples from different categories induce distinct, independent distributions of activation states for each neuron. This study therefore proposes a method that detects adversarial examples of fake news by observing the activation states of neurons and modeling them as a Poisson distribution. Experimental results show that the proposed method can effectively detect and remove AEs mixed into normal data, improving the classification accuracy of the model by about 17% and thereby improving the detection accuracy of fake news AEs.
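The abstract describes the core idea only at a high level. As a minimal illustrative sketch (not the authors' published procedure), one way to realize class-conditional Poisson modeling of neuron activation states is to estimate a Poisson rate per neuron and class from clean data, then flag inputs whose activation pattern is unlikely under every class model. All function names, the count-valued activation assumption, and the thresholding rule below are assumptions for illustration.

```python
import numpy as np
from scipy.special import gammaln

# Hypothetical sketch: model per-neuron activation counts as Poisson variables
# fitted on clean (non-adversarial) samples of each news class, then flag
# inputs whose activations are unlikely under every class-conditional model.

def fit_poisson_rates(activations, labels, n_classes):
    """Estimate a Poisson rate per neuron and class from clean training data.

    activations: (n_samples, n_neurons) non-negative activation counts
                 (e.g., quantized hidden-layer outputs).
    labels:      (n_samples,) class index per sample.
    Returns:     (n_classes, n_neurons) matrix of rate estimates lambda.
    """
    rates = np.zeros((n_classes, activations.shape[1]))
    for c in range(n_classes):
        rates[c] = activations[labels == c].mean(axis=0) + 1e-6  # avoid zero rates
    return rates

def poisson_log_likelihood(x, lam):
    """Log-likelihood of counts x under independent Poisson(lam) per neuron."""
    return np.sum(x * np.log(lam) - lam - gammaln(x + 1), axis=-1)

def flag_adversarial(activations, rates, threshold):
    """Mark samples whose best class-conditional likelihood falls below threshold."""
    scores = np.stack([poisson_log_likelihood(activations, lam) for lam in rates])
    return scores.max(axis=0) < threshold  # True = likely adversarial example
```

Samples flagged this way would be removed before classification, which is the filtering step the abstract credits with the accuracy improvement; the choice of threshold is left open here.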
| Original language | English |
| --- | --- |
| Pages (from-to) | 5199-5209 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Computational Social Systems |
| Volume | 11 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2024 |
Keywords
- Adversarial examples (AEs)
- artificial intelligence (AI)
- fake news detection