
Need loss evaluation baseline (loss before attack) #15

Closed
jcpeterson opened this issue Nov 15, 2018 · 4 comments
@jcpeterson

You can currently see the accuracy before the attack, but not the loss before the attack. Advice?

@tianweiy (Contributor) commented Nov 15, 2018

Hi, we currently don't support reporting the loss before the attack. Personally, I don't think comparing loss values is that common, but here is a simple change that does the job. In the IdentityEvaluation class of the adversarial_evaluation script, make the following changes.

In the `__init__` method, set:

```python
self.results = {'top1': utils.AverageMeter(), 'avg_loss_value': utils.AverageMeter()}
```

This adds an `avg_loss_value` entry so the evaluation now also reports the average loss on the ground-truth (unperturbed) images.
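For context, a minimal sketch of where that line sits inside the class; the constructor arguments shown here are placeholders, not the actual signature from the repository:

```python
class IdentityEvaluation(object):
    """Evaluates the classifier on unperturbed (pre-attack) examples."""

    def __init__(self, classifier_net, normalizer):  # placeholder arguments
        self.classifier_net = classifier_net
        self.normalizer = normalizer
        # 'avg_loss_value' meter added alongside the existing 'top1' meter
        self.results = {'top1': utils.AverageMeter(),
                        'avg_loss_value': utils.AverageMeter()}
```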

Then change the eval method to the following code.

```python
def eval(self, examples, labels):
    assert list(self.results.keys()) == ['top1', 'avg_loss_value']

    # eval the average accuracy
    ground_avg_accuracy = self.results['top1']
    ground_avg_loss = self.results['avg_loss_value']
    ground_output = self.classifier_net(self.normalizer(Variable(examples)))
    minibatch = float(examples.shape[0])

    # update ground accuracy
    ground_accuracy_int = utils.accuracy_int(ground_output,
                                             Variable(labels), topk=1)
    ground_avg_accuracy.update(ground_accuracy_int / minibatch,
                               n=int(minibatch))

    # eval the ground loss
    criterion = torch.nn.CrossEntropyLoss(reduction='sum')
    ground_loss_int = float(criterion(ground_output, Variable(labels)))
    ground_avg_loss.update(ground_loss_int / minibatch,
                           n=int(minibatch))
```

You can change the loss function by changing the `criterion` variable. Additionally, if your PyTorch version doesn't support the `reduction` keyword, use `size_average=False` instead.
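For reference, a minimal sketch of that fallback (`size_average=False` likewise sums the per-example losses on older PyTorch releases that predate the `reduction` keyword):

```python
# Fallback for older PyTorch versions without the `reduction` keyword
criterion = torch.nn.CrossEntropyLoss(size_average=False)
```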

I am not very familiar with the evaluation part of this toolbox, so I will discuss with Matt whether we should add this support and find a better way to implement it. Hope the above helps.

@revbucket (Owner)

Thanks Tianwei! That's probably the best way to do this, though we'll probably want to let users supply any desired loss. So I'll update IdentityEvaluation to compute the loss, and also make the loss a kwarg to IdentityEvaluation.

@jcpeterson: Here's the technique, then.

If you're okay with using cross-entropy loss in your evaluation, it will be the default, so you can do as Tianwei suggests and just read `eval_ensemble_out['ground']['avg_loss_value']`.

If you want a custom loss, build your own IdentityEvaluation instance, passing your desired loss via the `loss_fxn` kwarg, include it in your attack_ensemble, and call evaluate_ensemble with `skip_ground=True` (see the sketch below).
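A rough usage sketch of the two routes; only IdentityEvaluation, the `loss_fxn` kwarg, `skip_ground=True`, and the `['ground']['avg_loss_value']` key come from this thread, while the evaluator, data loader, and constructor arguments are placeholder assumptions:

```python
import torch

# Route 1: default cross-entropy. After evaluating as usual, the pre-attack
# loss is reported under the 'ground' key of the results dict.
pre_attack_loss = eval_ensemble_out['ground']['avg_loss_value']

# Route 2: custom loss. Build an IdentityEvaluation with your own loss via
# the loss_fxn kwarg, add it to the attack ensemble, and skip the built-in
# ground evaluation.
custom_loss = torch.nn.NLLLoss(reduction='sum')               # any loss you like
ground_eval = IdentityEvaluation(classifier_net, normalizer,  # placeholder args
                                 loss_fxn=custom_loss)
attack_ensemble['ground_custom_loss'] = ground_eval
eval_ensemble_out = evaluator.evaluate_ensemble(data_loader, attack_ensemble,
                                                skip_ground=True)  # placeholder call shape
```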

This is on my dev branch right now. I'll make sure this works and have it folded in to master by EOD.

@jcpeterson (Author)

Thanks!! I'll try this out and see how it works.

@revbucket (Owner)

Oh, by the way: this is in master now.
