Deep Neural Networks (DNNs) have gained significant popularity in various Natural Language Processing tasks. However, the lack of interpretability of DNNs makes it challenging to evaluate their robustness. In this paper, we focus on DNNs for sentiment analysis and conduct an empirical investigation of their sensitivity. Specifically, we apply a scoring function to rank word importance without depending on the parameters or structure of the deep neural model. We then examine the characteristics of these words to identify the model's weaknesses and perturb words to craft targeted attacks that exploit them. We conduct extensive experiments on different neural network models across several real-world datasets. We report four intriguing findings: i) modern deep learning models for sentiment analysis ignore important sentiment terms such as opinion adjectives (e.g., "amazing" or "terrible"); ii) adjectives contribute more to fooling sentiment analysis models than other Part-of-Speech (POS) categories; iii) changing or removing up to 10 adjectives in a review text decreases accuracy by at most 2%; and iv) modern models are unable to distinguish between an objective and a subjective review text.
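A model-agnostic word-importance score of the kind described above can be sketched as follows. This is a minimal illustration, not the paper's exact scoring function: it ranks each word by how much the black-box model's output drops when that word is deleted (leave-one-out occlusion). The `toy_predict` lexicon scorer is a hypothetical stand-in for a real sentiment model, included only to make the sketch runnable.

```python
def word_importance(text, predict):
    """Rank words by the drop in the model's score when each word is removed.

    `predict` is treated as a black box (str -> float, e.g. P(positive)),
    so the ranking does not depend on the model's parameters or structure.
    """
    words = text.split()
    base = predict(text)
    scores = []
    for i, w in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        # A larger drop in the score means the word is more important.
        scores.append((w, base - predict(ablated)))
    return sorted(scores, key=lambda s: s[1], reverse=True)


# Toy stand-in "model": a tiny sentiment lexicon, for illustration only.
LEXICON = {"amazing": 1.0, "terrible": -1.0, "good": 0.5}

def toy_predict(text):
    return sum(LEXICON.get(w, 0.0) for w in text.split())

ranking = word_importance("the movie was amazing and good", toy_predict)
```

Because the score only queries the model's output, the same routine works unchanged for any classifier, which is what allows the comparison across different neural architectures.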