How to install word2vec for Python
Installing gensim with `conda install gensim` is different from `pip install gensim`. The installer notes that a C compiler makes training much faster, so on Windows I installed MinGW.

For processing the Chinese Wikipedia dump, gensim has a dedicated class for wiki corpora (`WikiCorpus`). Chinese word segmentation is still done with jieba, and because Chinese Wikipedia mixes in traditional characters, simplified/traditional conversion is done with langconv.

The program also failed at first, which exposed a bad Python habit of mine: the script body should be wrapped in a function,

```python
if __name__ == '__main__':
    my_function()
```

so that importing the file from another module does not run it.

```python
# -*- coding: utf-8 -*-
# note: Python 2 syntax (print statement, str/unicode handling)
from gensim.corpora import WikiCorpus
import jieba
from langconv import *

__author__ = 'Lust'

# read the wiki .xml.bz2 dump
# transform it to simplified Chinese (using langconv)
# segment the Chinese text (using jieba)
# save the result as txt
def my_function():
    space = " "
    i = 0
    l = []
    a = '..//data//zhwiki-latest-pages-articles.xml.bz2'
    f = open('..//data//reduce_zhiwiki.txt', 'w')
    wiki = WikiCorpus(a, lemmatize=False, dictionary={})
    for text in wiki.get_texts():
        for temp_sentence in text:
            temp_sentence = Converter('zh-hans').convert(temp_sentence.decode('utf-8'))
            temp_sentence = temp_sentence.encode('utf-8')
            seg_list = list(jieba.cut(temp_sentence))
            for temp_term in seg_list:
                l.append(temp_term.encode('utf-8'))
        f.write(space.join(l) + "\n")
        l = []
        i = i + 1
        print "Saved " + str(i) + " articles"
        if i == 100:  # limit the number of articles for testing
            break
    f.close()

if __name__ == '__main__':
    my_function()
```

word2vec in gensim is super simple: it is a one-function affair.
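The preprocessing step above writes each article as one line of space-separated tokens; that plain-text format is exactly what `LineSentence` consumes in the training step. A minimal sketch with made-up tokens (not real jieba output):

```python
# -*- coding: utf-8 -*-
# Made-up segmented tokens for one short article; the real ones come from
# jieba.cut over the converted wiki text.
tokens = [u'数学', u'是', u'一门', u'学科']

# one article per line in reduce_zhiwiki.txt
line = u' '.join(tokens)
print(line)

# LineSentence later splits each line back on whitespace
assert line.split() == tokens
```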
The only thing to watch is `workers=multiprocessing.cpu_count()-4`: without the `-4`, Win10 blue-screened because the CPU was pinned at 100%. Worked the machine blue, I suppose?

```python
# -*- coding: utf-8 -*-
# note: Python 2 syntax
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence
import multiprocessing

__author__ = 'Lust'

# read the segmented txt
# train word2vec on it
# save the model and the vectors
def my_function():
    a = '..//data//zhiwiki_news.txt'
    f_1 = '..//result//zhiwiki_news.model'
    f_2 = '..//result//zhiwiki_news.vector'
    model = Word2Vec(LineSentence(a), size=400, window=5, min_count=5,
                     workers=multiprocessing.cpu_count() - 4)
    model.save(f_1)
    # binary=False writes the vectors as plain text
    model.save_word2vec_format(f_2, binary=False)

if __name__ == '__main__':
    my_function()
```

Using the trained model:

```python
# -*- coding: utf-8 -*-
# note: Python 2 syntax
import gensim

__author__ = 'Lust'

# load the saved word vectors (text format) and query them
def my_function():
    model = gensim.models.Word2Vec.load_word2vec_format("wiki.en.text.vector", binary=False)
    print model.most_similar("man")
    print model.similarity("woman", "girl")

if __name__ == '__main__':
    my_function()
```
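For context on the last call: `model.similarity(w1, w2)` returns the cosine similarity between the two word vectors. A self-contained sketch of that computation on toy 3-dimensional vectors (the real vectors here would be 400-dimensional, per `size=400` above):

```python
import math

def cosine_similarity(u, v):
    # dot(u, v) / (|u| * |v|): near 1.0 for parallel vectors,
    # 0.0 for orthogonal ones (up to float rounding)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # parallel vectors
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # orthogonal vectors
```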
Please credit the source when reposting: 51数据库 » python word vector