Despite their success in security applications such as malware detection, Machine Learning (ML) techniques are criticized for their vulnerability to Adversarial Examples (AEs): perturbed input samples (e.g., malware) that mislead an ML model into producing an adversary's desired output (e.g., a benign class label). AEs against ML models are broadly studied in the computer vision domain, where the adversary perturbs the pixel values of an image such that the change is imperceptible, yet the resulting image is misclassified by the model. We investigate the effectiveness of attack techniques proposed in the image domain against ML classifiers in the context of mobile malware detection. Since samples are commonly represented as feature vectors in ML, a simplified evaluation of a classifier's robustness to AEs is to study feature-based attack models, in which the adversary perturbs the input features directly. We compare the methods, trade-offs, and gaps of such attack models and show that generative models (e.g., GANs) outperform a selection of existing attacks in terms of attack success rate, but at the cost of applying larger distortion to the original sample. We also describe how we use the generated samples to increase a classifier's robustness through adversarial training.
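To make the feature-based attack model concrete, the following is a minimal sketch (the function, the toy linear model, and all parameter values are hypothetical illustrations, not the paper's actual attack): it perturbs the binary feature vector of a malware sample by only adding features, a common domain constraint since removing features could break the malware's functionality.

```python
import numpy as np

def feature_space_attack(x, w, b, max_flips=10):
    """Greedy feature-addition attack on a linear malware classifier.

    x    : binary feature vector of a malware sample (1 = feature present)
    w, b : weights and bias of a linear model where score > 0 => malware
    Only 0 -> 1 flips are allowed, mimicking the constraint that the
    adversary may add features but not remove existing functionality.
    """
    x_adv = x.copy()
    for _ in range(max_flips):
        if w @ x_adv + b <= 0:  # already classified as benign
            break
        # candidates: absent features whose weight lowers the score most
        candidates = np.where((x_adv == 0) & (w < 0))[0]
        if candidates.size == 0:
            break  # no feature addition can reduce the malware score
        best = candidates[np.argmin(w[candidates])]
        x_adv[best] = 1
    return x_adv

# toy example with a hypothetical 5-feature model
w = np.array([2.0, 1.5, -1.0, -2.5, 0.5])
x = np.array([1, 1, 0, 0, 0])            # malware sample: score 3.5 > 0
x_adv = feature_space_attack(x, w, b=0.0)
print(x_adv, w @ x_adv)                  # perturbed sample and its score
```

The number of flipped features serves as a simple distortion measure; the GAN-based attacks discussed in the paper trade a higher success rate for more such distortion.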