Natural Language Processing | A Detailed Look at the NLTK Library

2019-08-29 11:02:28

Natural Language Processing (NLP)

Natural language processing (NLP) is an important direction in computer science and artificial intelligence. It studies theories and methods that enable effective communication between humans and computers in natural language. NLP is a discipline that blends linguistics, computer science, and mathematics.

Applications of natural language processing

  • Search engines, such as Google and Yahoo. Google can learn through NLP that you are a tech enthusiast, and so return technology-related results.
  • Social feeds, such as Facebook's News Feed. The feed algorithm uses natural language processing to learn your interests and show you relevant ads and posts instead of irrelevant ones.
  • Voice assistants, such as Apple's Siri.
  • Spam filters, such as Google's. Beyond ordinary rule-based filtering, modern spam filters analyze the content of an email to decide whether it is spam.

NLTK

NLTK is a leading platform for building Python programs that work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning. It is the best-known natural language processing library for Python, shipping with its own corpora, a part-of-speech tagger, classifiers, tokenizers, and more. NLTK has been called "a wonderful tool for teaching, and working in, computational linguistics using Python" and "an amazing library to play with natural language."

Installing NLTK and its corpora

pip install nltk

Note that this only installs the framework itself; the corpora and models are not included.

# In a new IPython session, run:
import nltk 
nltk.download()

In the download window, fetching the book and popular collections is usually enough.

Feature overview

With everything installed, let's have some fun.

Understanding tokenization

Tokenization splits a long sentence into "meaningful" pieces; here we use nltk.word_tokenize.

>>> import nltk
>>> sentence = "hello,,world"
>>> tokens = nltk.word_tokenize(sentence)
>>> tokens
['hello', ',', ',world']
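For comparison, here is what plain str.split does on the same input, plus a rough stdlib-only approximation of punctuation splitting with re. This is just an illustration, not what NLTK does internally:

```python
import re

sentence = "hello,,world"
# str.split only cuts on whitespace, so punctuation stays glued to the words:
print(sentence.split())                      # ['hello,,world']
# A crude regex tokenizer: runs of word characters, or single punctuation marks:
print(re.findall(r"\w+|[^\w\s]", sentence))  # ['hello', ',', ',', 'world']
```

Note that the regex splits the second comma away from "world", whereas nltk.word_tokenize in the session above keeps ',world' together in this malformed edge case.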

Tagging text

>>> import nltk
>>> sentence = """At eight o'clock on Thursday morning
... Arthur didn't feel very good."""
>>> tokens = nltk.word_tokenize(sentence)
>>> tokens
['At', 'eight', "o'clock", 'on', 'Thursday', 'morning',
'Arthur', 'did', "n't", 'feel', 'very', 'good', '.']
>>> tagged = nltk.pos_tag(tokens)  # tag each token with its part of speech
>>> tagged[0:6]
[('At', 'IN'), ('eight', 'CD'), ("o'clock", 'JJ'), ('on', 'IN'),
('Thursday', 'NNP'), ('morning', 'NN')]

Loading the built-in corpora

Word tokenization (note: this only handles English)

>>> from nltk.tokenize import word_tokenize 
>>> from nltk.text import Text
>>> input_str = "Today's weather is good, very windy and sunny, we have no classes in the afternoon,We have to play basketball tomorrow."
>>> tokens = word_tokenize(input_str)
>>> tokens[:5]
['Today', "'s", 'weather', 'is', 'good']
>>> tokens = [word.lower() for word in tokens]  # lowercase everything
>>> tokens[:5]
['today', "'s", 'weather', 'is', 'good']

Finding a word's position and count

>>> t = Text(tokens)
>>> t.count('good')
1
>>> t.index('good')
4
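Under the hood, Text wraps a plain token list, and count and index behave like the corresponding list methods, so the same lookups work without NLTK at all:

```python
# The same token list as above, after lowercasing:
tokens = ['today', "'s", 'weather', 'is', 'good']
print(tokens.count('good'))  # 1 -- number of occurrences
print(tokens.index('good'))  # 4 -- position of the first occurrence
```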

You can also plot the most frequent tokens:

t.plot(8)
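Text.plot is backed by a frequency distribution; if you just want the counts without opening a matplotlib window, FreqDist (part of NLTK's core, no extra downloads needed) gives the same numbers:

```python
from nltk import FreqDist

# FreqDist counts token occurrences, like collections.Counter:
fd = FreqDist(['today', 'is', 'good', 'is'])
print(fd['is'])   # 2 -- frequency of a token
print(fd.max())   # 'is' -- the most frequent token
```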

Stop words

from nltk.corpus import stopwords
stopwords.fileids()  # list the supported languages
### Sure enough, no Chinese
['arabic', 'azerbaijani', 'danish', 'dutch', 'english', 'finnish', 'french', 'german', 'greek',
 'hungarian', 'italian', 'kazakh', 'nepali', 'norwegian', 'portuguese', 'romanian', 'russian',
  'spanish', 'swedish', 'turkish']

stopwords.raw('english').replace('\n', ' ')  # the raw file is newline-separated; replace the newlines with spaces

"i me my myself we our ours ourselves you you're you've you'll you'd your yours yourself yourselves he him his himself she she's her hers herself it it's its itself they them their theirs themselves what which who whom this that that'll these those am is are was were be been being have has had having do does did doing a an the and but if or because as until while of at by for with about against between into through during before after above below to from up down in out on off over under again further then once here there when where why how all any both each few more most other some such no nor not only own same so than too very s t can will just don don't should should've now d ll m o re ve y ain aren aren't couldn couldn't didn didn't doesn doesn't hadn hadn't hasn hasn't haven haven't isn isn't ma mightn mightn't mustn mustn't needn needn't shan shan't shouldn shouldn't wasn wasn't weren weren't won won't wouldn wouldn't "

Using stop words

test_words = [word.lower() for word in tokens]  # tokens is from the sentence above
test_words_set = set(test_words)  # deduplicate into a set
test_words_set.intersection(set(stopwords.words('english')))
{'and', 'have', 'in', 'is', 'no', 'the', 'to', 'very', 'we'}

So in "Today's weather is good, very windy and sunny, we have no classes in the afternoon,We have to play basketball tomorrow." the stop words are: 'and', 'have', 'in', 'is', 'no', 'the', 'to', 'very', and 'we'.

Filtering out stop words

filtered = [w for w in test_words_set if(w not in stopwords.words('english'))]
filtered
['today',
 'good',
 'windy',
 'sunny',
 'afternoon',
 'play',
 'basketball',
 'tomorrow',
 'weather',
 'classes',
 ',',
 '.',
 "'s"]

Part-of-speech tagging

from nltk import pos_tag
tags = pos_tag(tokens)
tags
[('Today', 'NN'),
 ("'s", 'POS'),
 ('weather', 'NN'),
 ('is', 'VBZ'),
 ('good', 'JJ'),
 (',', ','),
 ('very', 'RB'),
 ('windy', 'JJ'),
 ('and', 'CC'),
 ('sunny', 'JJ'),
 (',', ','),
 ('we', 'PRP'),
 ('have', 'VBP'),
 ('no', 'DT'),
 ('classes', 'NNS'),
 ('in', 'IN'),
 ('the', 'DT'),
 ('afternoon', 'NN'),
 (',', ','),
 ('We', 'PRP'),
 ('have', 'VBP'),
 ('to', 'TO'),
 ('play', 'VB'),
 ('basketball', 'NN'),
 ('tomorrow', 'NN'),
 ('.', '.')]

POS tag table
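The original post showed the tag table as an image; as a stand-in, here is a hand-written summary of the Penn Treebank tags that appear in the output above:

```python
# Common Penn Treebank POS tags, summarized by hand:
common_tags = {
    'CC':  'coordinating conjunction',
    'CD':  'cardinal number',
    'DT':  'determiner',
    'IN':  'preposition / subordinating conjunction',
    'JJ':  'adjective',
    'NN':  'noun, singular or mass',
    'NNS': 'noun, plural',
    'NNP': 'proper noun, singular',
    'PRP': 'personal pronoun',
    'RB':  'adverb',
    'VB':  'verb, base form',
    'VBD': 'verb, past tense',
    'VBP': 'verb, non-3rd person singular present',
    'VBZ': 'verb, 3rd person singular present',
}
print(common_tags['NN'])  # noun, singular or mass
```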

Chunking

from nltk.chunk import RegexpParser
sentence = [('the','DT'),('little','JJ'),('yellow','JJ'),('dog','NN'),('died','VBD')]
grammar = "MY_NP: {<DT>?<JJ>*<NN>}"
cp = RegexpParser(grammar)  # build the chunker from the rule
result = cp.parse(sentence)  # chunk the tagged sentence
print(result)
out:
(S (MY_NP the/DT little/JJ yellow/JJ dog/NN) died/VBD)
result.draw()  # draw the tree in a pop-up window (uses tkinter)

Named entity recognition

Named entity recognition (NER) is a fundamental NLP task: identifying named mentions in text, which lays the groundwork for downstream tasks such as relation extraction. In the narrow sense it means recognizing three kinds of named entities: person names, place names, and organization names (entity types with obvious surface patterns, such as times and monetary amounts, can instead be recognized with regular expressions). In specific domains, additional domain-specific entity types may also be defined.

from nltk import ne_chunk
sentence = "Edison went to Tsinghua University today."
print(ne_chunk(pos_tag(word_tokenize(sentence))))
(S
  (PERSON Edison/NNP)
  went/VBD
  to/TO
  (ORGANIZATION Tsinghua/NNP University/NNP)
  today/NN
  ./.)
