In this early paper from 2006, Barreno et al. discuss research directions for security in machine learning (now commonly known as adversarial machine learning); as such, their work may be one of the earliest discussions of the field. While many of their terms and definitions are not consistent with recent literature, they cover a large part of the attacks studied in recent publications. In particular, they categorize attacks along three axes. First, they distinguish between causative attacks, which influence the training process, and exploratory attacks, which do not. Second, they consider targeted attacks aimed at one specific example and indiscriminate attacks aimed at a larger set of examples; note that this contrasts with today's notion of targeted and non-targeted attacks, which usually describes whether misclassification as a specific target class is requested. Third, they distinguish integrity attacks, which cause misclassifications, from availability attacks, which aim to reduce the availability and usability of a system (similar to denial-of-service attacks).

Unfortunately, these three axes do not capture several important aspects of machine learning systems. For example, I found an explicit distinction between supervised and (semi-)unsupervised learning, or between online and offline training, to be missing. Additionally, the difference between attacks at training time and attacks at test time is not emphasized. Finally, the authors also discuss possible future research directions regarding defenses against these attacks; these are summarized in Table 2.
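To make the three axes concrete, the taxonomy can be sketched as a small set of enumerations, with one hypothetical attack classified along all three dimensions. This is purely illustrative: the class and attribute names, and the example attack, are my own and do not come from the paper.

```python
from dataclasses import dataclass
from enum import Enum

# The three axes of the taxonomy by Barreno et al. (2006).
class Influence(Enum):
    CAUSATIVE = "influences the training process"
    EXPLORATORY = "probes the model without changing training"

class Specificity(Enum):
    TARGETED = "aimed at one specific example"
    INDISCRIMINATE = "aimed at a larger set of examples"

class SecurityViolation(Enum):
    INTEGRITY = "causes misclassifications"
    AVAILABILITY = "reduces availability/usability of the system"

@dataclass(frozen=True)
class Attack:
    name: str
    influence: Influence
    specificity: Specificity
    violation: SecurityViolation

# Hypothetical example: poisoning a spam filter's training data so that
# it flags large amounts of legitimate mail, degrading the system.
poisoning = Attack(
    name="spam-filter training-data poisoning",
    influence=Influence.CAUSATIVE,
    specificity=Specificity.INDISCRIMINATE,
    violation=SecurityViolation.AVAILABILITY,
)

print(poisoning.influence.name,
      poisoning.specificity.name,
      poisoning.violation.name)
```

Encoding the axes this way also makes clear that the three dimensions are independent: any combination of the eight cells is a distinct attack class in their framework.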