
Evaluation metrics #2

Open

LG-SS opened this issue Jul 30, 2019 · 9 comments

Comments

@LG-SS

LG-SS commented Jul 30, 2019

Could you provide the code for your evaluation metrics? Some of the metrics I measured don't match the paper.

@JasonSWFu
Owner

Hi, I guess you are running the dataset used in Table 2. Please use https://github.com/JasonSWFu/MetricGAN/blob/master/pesq_cd.m for PESQ evaluation. Please also note that, as mentioned in the paper, the input features and activation functions used in Table 2 are different from those provided here.

@LG-SS
Author

LG-SS commented Jul 31, 2019

Thx

@LG-SS
Author

LG-SS commented Aug 12, 2019

Hi, I guess you are running the dataset used in table2. Please use https://github.com/JasonSWFu/MetricGAN/blob/master/pesq_cd.m for PESQ evaluation. Please also note that, as mentioned in the paper, the input features and activation functions used in table 2 are different from those provided here.

Hello, for the Table 2 experiments, what were num_of_sampling and GAN_epoch set to? In the provided code they are 100 and 200 respectively, which seems too small.

@aishinchi

Hello, without changing any parameters, I trained for 200 epochs (num_of_sampling=100) on the Table 2 dataset, but the average PESQ on the test set is only about 2.5 (the table reports 2.86, measured with pesq_cd.m). What could be the cause? The generator's loss is around 0.10.

@JasonSWFu
Owner

Hi, as mentioned in the paper, the input features and activation functions used in Table 2 are different from those provided here. To easily get improved results, you can try to apply np.log1p on the input features. ( Lp=np.abs(F) => Lp=np.log1p(np.abs(F)) and E=np.squeeze(noisy_LP*mask) => E=np.expm1(np.squeeze(noisy_LP*mask)) )
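The suggested change above can be sketched as follows: compress the magnitude spectrogram with np.log1p on the way in, and invert with np.expm1 on the way out. The array shapes and the random stand-ins for the STFT `F` and the generator's `mask` are illustrative assumptions, not the repo's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for a complex STFT of a noisy utterance (257 bins x 100 frames)
F = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))

Lp = np.log1p(np.abs(F))      # compressed input feature (was np.abs(F))
mask = rng.random(Lp.shape)   # stand-in for the mask predicted by the generator
noisy_LP = Lp

# expm1 undoes log1p, mapping the masked feature back to linear magnitude
# before ISTFT; skipping this step leaves the output in the compressed domain.
E = np.expm1(np.squeeze(noisy_LP * mask))
```

Since np.expm1 is the exact inverse of np.log1p, the pair leaves an unmasked spectrogram unchanged, so the only effect is to let the network operate on compressed magnitudes.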

@aishinchi

Hi, as mentioned in the paper, the input features and activation functions used in table 2 are different from those provided here. To easily get improved results, you can try to apply np.log1p on the input features. ( Lp=np.abs(F) => Lp=np.log1p(np.abs(F)) and E=np.squeeze(noisy_LP_mask) => E=np.expm1(np.squeeze(noisy_LP_mask)) )

Sincere thanks for your kindness. I'll try the features you mentioned.

@LexiYIN

LexiYIN commented Mar 30, 2021

Hi, as mentioned in the paper, the input features and activation functions used in table 2 are different from those provided here. To easily get improved results, you can try to apply np.log1p on the input features. ( Lp=np.abs(F) => Lp=np.log1p(np.abs(F)) and E=np.squeeze(noisy_LP_mask) => E=np.expm1(np.squeeze(noisy_LP_mask)) )

Hi, I found that np.log1p is already applied in your original code, and I still got a mean PESQ of 2.523. Anything else to try?

@JasonSWFu
Owner

JasonSWFu commented Mar 30, 2021 via email

@LexiYIN

LexiYIN commented Mar 30, 2021

Hi, you can directly try this code: https://github.com/JasonSWFu/MetricGAN/blob/master/MetricGAN(tableII).py

Lexi @.***> wrote on Tue, Mar 30, 2021 at 2:19 PM:

Hi, as mentioned in the paper, the input features and activation functions used in table 2 are different from those provided here. To easily get improved results, you can try to apply np.log1p on the input features. ( Lp=np.abs(F) => Lp=np.log1p(np.abs(F)) and E=np.squeeze(noisy_LP_mask) => E=np.expm1(np.squeeze(noisy_LP_mask)) ) Hi, I find it already applied the np.log1p in your original code, and I got mean PESQ = 2.523, any other try?

Hi, indeed, that is the script I have been training with.
