Need loss evaluation baseline (loss before attack) #15
You can currently see the accuracy before the attack, but not the loss before the attack. Advice?

Comments
Hi, we currently don't support reporting the loss before the attack. Personally, I think comparing loss values is not that common, but I wrote a simple script to do the work. In the IdentityEvaluation class of the adversarial_evaluation script, add a loss criterion in the init method, then change the eval method (`def eval(self, examples, labels):`) so that it also computes the loss on the clean examples. (The original code snippet in this comment was truncated; a sketch of the change follows below.)
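A minimal sketch of what those changes might look like in PyTorch. The class name `IdentityEvaluation` and the `eval(self, examples, labels)` signature come from the comment above; the attribute names (`classifier_net`, `normalizer`) and the return format are assumptions for illustration, not the toolbox's confirmed internals.

```python
import torch.nn as nn

class IdentityEvaluation(object):
    # Sketch only: `classifier_net` and `normalizer` are assumed
    # attribute names, not confirmed internals of the toolbox.
    def __init__(self, classifier_net, normalizer=None):
        self.classifier_net = classifier_net
        self.normalizer = normalizer
        # Suggested addition: a summed loss so per-batch values can be
        # accumulated and averaged over the whole evaluation set.
        self.criterion = nn.CrossEntropyLoss(reduction='sum')

    def eval(self, examples, labels):
        # Forward pass on the clean (pre-attack) examples.
        inputs = self.normalizer(examples) if self.normalizer else examples
        logits = self.classifier_net(inputs)

        # Accuracy before the attack (already reported by the toolbox;
        # shown here only for context).
        num_correct = (logits.argmax(dim=1) == labels).sum().item()

        # Loss before the attack: the baseline requested in this issue.
        total_loss = self.criterion(logits, labels).item()

        return num_correct, total_loss
```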
You can change the loss function by changing the criterion variable. Additionally, if your PyTorch version doesn't support the `reduction` argument on the loss, use `size_average=False` instead. I am not very familiar with the evaluation part of this toolbox, so I will discuss with Matt whether we should add this support and find a better way to implement it. Hope the above program helps.
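For reference, the two forms mentioned look like this; which one you need depends on your PyTorch version (the `reduction` argument arrived around 0.4.1, replacing the older `size_average`/`reduce` flags):

```python
import torch.nn as nn

# Newer PyTorch (roughly 0.4.1 onward): use the `reduction` argument.
criterion = nn.CrossEntropyLoss(reduction='sum')

# Older PyTorch: `reduction` does not exist yet; `size_average=False`
# gives the summed (rather than averaged) loss instead.
criterion = nn.CrossEntropyLoss(size_average=False)
```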
Thanks Tianwei! That's probably the best way to do this, though we'll probably want to make it so users can use any desired loss, so I'll update the [...] accordingly.

@jcpeterson: Here's the technique then: if you want to use a custom loss, build your own [...]. This is on my [...].
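The exact instructions for plugging in a custom loss are cut off above, so the following is only a generic PyTorch illustration, not this toolbox's actual API: any callable that maps (logits, labels) to a scalar tensor can stand in for the built-in criterion in the sketch earlier in the thread.

```python
import torch.nn.functional as F

def scaled_xent_loss(logits, labels, scale=2.0):
    # Hypothetical custom loss with the same (logits, labels) -> scalar
    # signature as the built-in criterion; purely illustrative.
    return scale * F.cross_entropy(logits, labels, reduction='sum')

# Drop-in replacement for the criterion in the earlier sketch
# (the `criterion` attribute name comes from that sketch, not the toolbox):
# evaluator.criterion = scaled_xent_loss
```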
Thanks!! I will add this! Let me try this out and see how it works.
Obtw, this is in master now.