Training Word Vectors with gensim's word2vec in Python

cenylon 2018-09-05

Preparation

With Anaconda installed, you can install gensim from the command prompt:

conda install gensim

About gensim

gensim is a powerful natural language processing library that ships with many common models. Here is an overview:

interfaces – Core gensim interfaces
utils – Various utility functions
matutils – Math utils
corpora.bleicorpus – Corpus in Blei’s LDA-C format
corpora.dictionary – Construct word<->id mappings
corpora.hashdictionary – Construct word<->id mappings
corpora.lowcorpus – Corpus in List-of-Words format
corpora.mmcorpus – Corpus in Matrix Market format
corpora.svmlightcorpus – Corpus in SVMlight format
corpora.wikicorpus – Corpus from a Wikipedia dump
corpora.textcorpus – Building corpora with dictionaries
corpora.ucicorpus – Corpus in UCI bag-of-words format
corpora.indexedcorpus – Random access to corpus documents
models.ldamodel – Latent Dirichlet Allocation
models.ldamulticore – parallelized Latent Dirichlet Allocation
models.ldamallet – Latent Dirichlet Allocation via Mallet
models.lsimodel – Latent Semantic Indexing
models.tfidfmodel – TF-IDF model
models.rpmodel – Random Projections
models.hdpmodel – Hierarchical Dirichlet Process
models.logentropy_model – LogEntropy model
models.lsi_dispatcher – Dispatcher for distributed LSI
models.lsi_worker – Worker for distributed LSI
models.lda_dispatcher – Dispatcher for distributed LDA
models.lda_worker – Worker for distributed LDA
models.word2vec – Deep learning with word2vec
models.doc2vec – Deep learning with paragraph2vec
models.dtmmodel – Dynamic Topic Models (DTM) and Dynamic Influence Models (DIM)
models.phrases – Phrase (collocation) detection
similarities.docsim – Document similarity queries
simserver – Document similarity server

As you can see, it covers:

- Basic corpus-processing tools

- LSI

- LDA

- HDP

- DTM

- DIM

- TF-IDF

- word2vec、paragraph2vec

We'll get to the other models when we need them; today let's try out:

word2vec

#encoding=utf-8
from gensim.models import word2vec

# the corpus file contains pre-segmented (tokenized) reviews, one per line
sentences = word2vec.Text8Corpus(u'分词后的爽肤水评论.txt')
model = word2vec.Word2Vec(sentences, size=50)

y2 = model.similarity(u"好", u"还行")  # similarity between "good" and "okay"
print(y2)
for i in model.most_similar(u"滋润"):  # words closest to "moisturizing"
    print(i[0], i[1])

The txt file contains 50,000 reviews that have already been word-segmented. Training the model takes just one line:

model = word2vec.Word2Vec(sentences, min_count=5, size=50)

The first argument is the training corpus. The second, min_count, drops every word that occurs fewer than that many times (default 5).

The third, size, is the number of units in the network's hidden layer, i.e. the dimensionality of the word vectors (default 100).
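The effect of min_count can be sketched in plain Python: count token frequencies across the whole corpus, then keep only tokens seen at least min_count times. The toy corpus below is made up for illustration:

```python
from collections import Counter

# made-up toy corpus: each sentence is a list of pre-segmented tokens
sentences = [["good", "gentle", "good"], ["gentle", "cheap"], ["good"]]
counts = Counter(token for sent in sentences for token in sent)

min_count = 2
vocab = {w for w, c in counts.items() if c >= min_count}
print(sorted(vocab))  # "cheap" appears only once, so it is dropped
```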

model.similarity(u"好", u"还行")  # cosine similarity between the two words
model.most_similar(u"滋润")       # the 10 words closest to "滋润" by cosine similarity
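What similarity computes is the cosine similarity between the two words' vectors. Here is a minimal numpy sketch of that computation, using made-up 3-dimensional vectors in place of real embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    # dot product of the vectors divided by the product of their norms
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical vectors standing in for two words' embeddings
v_good = [0.8, 0.1, 0.3]
v_okay = [0.7, 0.2, 0.4]
print(cosine_similarity(v_good, v_okay))  # close to 1 for similar directions
```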

Output:

0.642981583608
保湿 0.995047152042
温和 0.985100984573
高 0.978088200092
舒服 0.969187200069
补水 0.967649161816
清爽 0.960570812225
水水 0.958645284176
一般 0.928643763065
一款 0.911774456501
真的 0.90943980217

Not bad, considering the corpus is only 50,000 reviews.
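Under the hood, most_similar simply ranks every other word in the vocabulary by cosine similarity to the query word. A sketch of that ranking over a hypothetical four-word vocabulary (the vectors are invented; a real model would supply them):

```python
import numpy as np

# hypothetical word vectors; a trained model would supply these
vocab = {
    "moisturizing": [0.90, 0.10, 0.20],
    "hydrating":    [0.85, 0.15, 0.25],
    "refreshing":   [0.30, 0.80, 0.10],
    "price":        [0.10, 0.20, 0.90],
}

def most_similar(word, topn=3):
    # rank all other words by cosine similarity to the query word
    q = np.asarray(vocab[word], dtype=float)
    q = q / np.linalg.norm(q)
    scores = []
    for w, v in vocab.items():
        if w == word:
            continue
        v = np.asarray(v, dtype=float)
        scores.append((w, float(np.dot(q, v / np.linalg.norm(v)))))
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:topn]

for w, s in most_similar("moisturizing"):
    print(w, round(s, 3))  # "hydrating" should rank first
```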

Of course, you can also save and reload the model you worked so hard to train:

model.save('/model/word2vec_model')

import gensim
new_model = gensim.models.Word2Vec.load('/model/word2vec_model')

You can also look up the vector for any individual word:

model[u'滋润']  # the 50-dimensional vector for this word

The parameters passed at training time have a big effect on the result, and the right choices depend on your corpus. Good word vectors matter a lot for downstream NLP tasks such as classification, clustering, and similarity judgments!
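One simple way to turn word vectors into features for those classification or clustering tasks is to average the vectors of the words in each review. A sketch with hypothetical 2-dimensional vectors (real features would use the trained model's 50-dimensional ones):

```python
import numpy as np

# hypothetical word vectors; in practice these come from the trained model
word_vectors = {"good": [0.8, 0.1], "gentle": [0.7, 0.3], "cheap": [0.1, 0.9]}

def review_vector(tokens):
    # average the vectors of in-vocabulary tokens into one fixed-size feature
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(np.asarray(vecs, dtype=float), axis=0)

print(review_vector(["good", "gentle"]))  # element-wise mean of the two vectors
```

The resulting fixed-size vector can be fed directly to any standard classifier or clustering algorithm.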
