Input: data/local/train/text, data/local/dict/lexicon.txt
Output: data/local/lm (containing text.no_oov, word.counts, unigram.counts, word_map, 3gram-mincount/lm_unpruned.gz)
```bash
local/train_lms.sh || exit 1;
```
Process (a simplified shell sketch of these steps follows the list):
- text.no_oov: data/local/train/text with the utterance IDs stripped and every word not found in the lexicon replaced by <UNK>
- word.counts: counts of each word in text.no_oov, sorted by frequency in descending order
- unigram.counts: word counts computed after merging text.no_oov with dict/lexicon.txt, sorted by frequency in descending order
- word_map: generated by get_word_map.pl from the words in unigram.counts, with "<s>", "</s>" and "<UNK>" added
- train.gz: text.no_oov rewritten through word_map and compressed into train.gz
- train_lm.sh --arpa --lmtype 3gram-mincount $dir: the resulting model is written to data/local/lm/3gram-mincount/lm_unpruned.gz
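Put together, the steps above correspond roughly to the shell pipeline below. This is only a simplified sketch of what local/train_lms.sh does, not the script itself: the real awk/perl invocations differ in details (e.g. how silence entries in the lexicon are handled), and get_word_map.pl is assumed to be available on PATH from the kaldi_lm toolkit.

```bash
# Assumed default paths used by the recipe.
text=data/local/train/text
lexicon=data/local/dict/lexicon.txt
dir=data/local/lm
mkdir -p $dir

# text.no_oov: drop the utterance ID (first field) and replace any word
# that is not in the lexicon with <UNK>.
awk -v lex=$lexicon 'BEGIN{while((getline < lex) > 0) seen[$1]=1}
  {for(n=2;n<=NF;n++) printf("%s ", seen[$n] ? $n : "<UNK>"); print ""}' \
  $text > $dir/text.no_oov

# word.counts: per-word counts in text.no_oov, most frequent first.
tr -s ' ' '\n' < $dir/text.no_oov | sort | uniq -c | sort -nr > $dir/word.counts

# unigram.counts: counts over text.no_oov merged with the lexicon words,
# again most frequent first.
{ tr -s ' ' '\n' < $dir/text.no_oov; awk '{print $1}' $lexicon; } \
  | sort | uniq -c | sort -nr > $dir/unigram.counts

# word_map: map every word in unigram.counts, plus <s>, </s> and <UNK>,
# to a short id.
awk '{print $2}' $dir/unigram.counts \
  | get_word_map.pl "<s>" "</s>" "<UNK>" > $dir/word_map

# train.gz: rewrite text.no_oov using word_map and compress it.
awk -v wmap=$dir/word_map 'BEGIN{while((getline < wmap) > 0) map[$1]=$2}
  {for(n=1;n<=NF;n++) printf("%s ", map[$n]); print ""}' \
  $dir/text.no_oov | gzip -c > $dir/train.gz

# Train the 3-gram (min-count) LM; the ARPA model ends up in
# data/local/lm/3gram-mincount/lm_unpruned.gz.
train_lm.sh --arpa --lmtype 3gram-mincount $dir
```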
Notes:
- ARPA file (lm_unpruned.gz)
Each entry in an ARPA file has up to three fields: probability, word(s), [backoff probability]. The backoff probability of an n-gram is used when a longer n-gram extending it is not listed in the file: the missing probability is then estimated from the lower-order n-gram, scaled by this backoff weight.
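For reference, the gunzipped lm_unpruned.gz is a plain-text ARPA file shaped roughly like the fragment below; the n-gram counts, words and log10 probabilities shown here are illustrative only, not values from the actual model.

```
\data\
ngram 1=4
ngram 2=2

\1-grams:
-99.0000	<s>	-0.3010
-0.6990	</s>
-1.0000	<UNK>	-0.5000
-0.9031	你好	-0.2218

\2-grams:
-0.3010	<s> 你好
-0.4771	你好 </s>

\end\
```

Entries that consist only of a probability and the n-gram (for example the highest-order n-grams) simply have no backoff field.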