Perplexity (PPL) is a commonly used metric for evaluating language models.
A language model predicts the probability distribution of the next word in a sentence, and can thereby assign a probability to an entire sentence. A good language model should assign higher probability to well-written sentences, i.e. sentences that would not leave a reader perplexed.
The definition of perplexity:
$$perplexity(W) = P(w_1 w_2 ... w_N)^{-\frac{1}{N}}$$
The perplexity of a language model on a test set $W = \{w_1, w_2, ..., w_N\}$ is the inverse of the test-set probability, normalized by the number of words $N$.
The core idea: the higher the probability of a sentence, the lower its perplexity, and the better the language model.
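As a quick illustration, this definition can be sketched in Python; `perplexity` here is a hypothetical helper, not code from the original post:

```python
import math

def perplexity(word_probs):
    """Perplexity from the per-word (conditional) probabilities of a sentence.

    Equivalent to P(W) ** (-1 / N), where P(W) is the product of the
    probabilities and N is the number of words; summing logs instead of
    multiplying avoids numerical underflow on long sentences.
    """
    n = len(word_probs)
    log_p = sum(math.log(p) for p in word_probs)  # log P(W)
    return math.exp(-log_p / n)

# A model that assigns probability 0.25 to every word has perplexity
# close to 4.0: on average it is as uncertain as a uniform choice
# among four words.
print(perplexity([0.25, 0.25, 0.25]))
```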
Suppose our language model has a vocabulary of only six tokens: ["a", "the", "red", "fox", "dog", "."].
Let us compute the probability of the sentence $W$ = "a red fox.".
$$P(W) = P(w_1 w_2 ... w_n)$$
So:
$$P(a\ red\ fox.) = P(a) \cdot P(red|a) \cdot P(fox|a\ red) \cdot P(.|a\ red\ fox)$$
Suppose the probabilities of the first word in a sentence are as follows:
$$P(w_1 = a) = 0.4$$
$$P(w_1 = the) = 0.3$$
$$P(w_1 = red) = 0.15$$
$$P(w_1 = fox) = 0.08$$
$$P(w_1 = dog) = 0.07$$
$$P(w_1 = .) = 0$$
So $P(a) = 0.4$.
Next, suppose our model gives the following distribution for the next word when the previous word is "a":
$$P(w_2 = a|a) = 0.01$$
$$P(w_2 = the|a) = 0.01$$
$$P(w_2 = red|a) = 0.27$$
$$P(w_2 = fox|a) = 0.3$$
$$P(w_2 = dog|a) = 0.4$$
$$P(w_2 = .|a) = 0.01$$
So $P(red|a) = 0.27$.
Similarly, suppose our model gives the distribution of the third word given the first two words "a red", and of the fourth word given the first three words "a red fox":
$$P(w_3 = a|a\ red) = 0.02$$
$$P(w_3 = the|a\ red) = 0.03$$
$$P(w_3 = red|a\ red) = 0.03$$
$$P(w_3 = fox|a\ red) = 0.55$$
$$P(w_3 = dog|a\ red) = 0.22$$
$$P(w_3 = .|a\ red) = 0.15$$
and
$$P(w_4 = a|a\ red\ fox) = 0.02$$
$$P(w_4 = the|a\ red\ fox) = 0.03$$
$$P(w_4 = red|a\ red\ fox) = 0.03$$
$$P(w_4 = fox|a\ red\ fox) = 0.02$$
$$P(w_4 = dog|a\ red\ fox) = 0.11$$
$$P(w_4 = .|a\ red\ fox) = 0.79$$
So:
$$P(a\ red\ fox.) = P(a) \cdot P(red|a) \cdot P(fox|a\ red) \cdot P(.|a\ red\ fox) = 0.4 \times 0.27 \times 0.55 \times 0.79 \approx 0.0469$$
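The product above can be verified with a few lines of Python (a sketch using the assumed probabilities from this example):

```python
# Per-word probabilities from the example:
# P(a), P(red|a), P(fox|a red), P(.|a red fox)
probs = [0.4, 0.27, 0.55, 0.79]

p_sentence = 1.0
for p in probs:
    p_sentence *= p

print(round(p_sentence, 4))  # -> 0.0469
```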
The probability of this sentence is thus about 0.0469. Can we directly compare this probability with those of other sentences generated by the model in order to judge their quality? No: a sentence's probability is a product of word probabilities, so it shrinks as the sentence gets longer. We therefore want a measure that is not affected by sentence length.
Since the sentence probability is a product, this problem can be solved with a geometric mean: we normalize the sentence probability by the number of words $n$ in the sentence:
$$P_{norm}(W) = \sqrt[n]{P(W)}$$
The normalized probability of "a red fox." is then
$$P_{norm}(a\ red\ fox.) = \sqrt[4]{P(a\ red\ fox.)} = P(a\ red\ fox.)^{1/4} \approx 0.465$$
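Numerically, continuing the example in Python:

```python
p_sentence = 0.4 * 0.27 * 0.55 * 0.79  # ≈ 0.0469
p_norm = p_sentence ** (1 / 4)         # geometric mean over the 4 words
print(round(p_norm, 3))  # -> 0.465
```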
Now that the probabilities are normalized, we can compare the probabilities of sentences of different lengths.
Going one step further, the concept of perplexity is defined as the reciprocal of this normalized probability:
$$Perplexity = \frac{1}{P_{norm}(W)} = \frac{1}{P(W)^{\frac{1}{n}}} = \left(\frac{1}{P(W)}\right)^{\frac{1}{n}}$$
Since it is the reciprocal of a probability, a lower perplexity means a higher sentence probability, and therefore a better language model.
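For the example sentence this works out as follows (a sketch reusing the assumed probabilities above):

```python
p_sentence = 0.4 * 0.27 * 0.55 * 0.79  # ≈ 0.0469
ppl = p_sentence ** (-1 / 4)           # reciprocal of the normalized probability
print(round(ppl, 2))  # -> 2.15
```

A perplexity of about 2.15 means that, on average, the model is about as uncertain at each step as if it were choosing uniformly among roughly two words.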