
How to use the NLTK library for text classification

NLTK (Natural Language Toolkit) is a Python library for natural language processing that can be used for tasks such as text classification. The basic steps for text classification with NLTK are:

  1. Import the NLTK library:
import nltk
  2. Download the NLTK data resources (a short note on the stopwords corpus follows the download calls):
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('stopwords')
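Of these three resources, only punkt (the tokenizer model) is strictly required by the example below; averaged_perceptron_tagger is used for part-of-speech tagging, and stopwords provides lists of common function words that are often filtered out before building features. A minimal sketch of loading the English stop word list (not used further in this walkthrough):

from nltk.corpus import stopwords

# English function words such as "the", "is", "a"
stop_words = set(stopwords.words('english'))
print(len(stop_words))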
  3. Prepare the text data:
# Sample text data: (document, label) pairs
documents = [
    ("This is a good movie", "positive"),
    ("I like this movie", "positive"),
    ("I hate this movie", "negative"),
    ("This is the worst movie ever", "negative")
]
  4. Extract features (an example of the resulting feature sets follows the code):
def document_features(document):
    # Lowercased token set of the document, so tokens match word_features
    document_words = set(w.lower() for w in nltk.word_tokenize(document))
    features = {}
    for word in word_features:
        features['contains({})'.format(word)] = (word in document_words)
    return features

# Frequency distribution over all alphabetic words in the corpus
all_words = nltk.FreqDist(
    w.lower()
    for (text, label) in documents
    for w in nltk.word_tokenize(text)
    if w.isalpha()
)
# Use the 100 most frequent words as features (more than this tiny corpus contains)
word_features = [w for (w, _count) in all_words.most_common(100)]
featuresets = [(document_features(d), c) for (d, c) in documents]
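Each element of featuresets pairs a dictionary of boolean contains(word) features with its label, which is the input format nltk.NaiveBayesClassifier expects. Printing one entry shows this structure (the exact keys depend on the tokenized vocabulary):

# e.g. ({'contains(this)': True, 'contains(is)': True, ...}, 'positive')
print(featuresets[0])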
  5. Split into training and test sets:
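# Train on the first 3 examples, test on the last one (a toy split; real tasks need much more data)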
train_set, test_set = featuresets[:3], featuresets[3:]
  6. Train a classifier:
classifier = nltk.NaiveBayesClassifier.train(train_set)
  7. Evaluate the classifier on the test set:
print(nltk.classify.accuracy(classifier, test_set))
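
Beyond the accuracy score, the trained model can classify new text and report which features carry the most weight. A minimal sketch reusing the document_features function defined above (the sample sentence is made up for illustration):

# Classify a previously unseen document
new_doc = "I really like this movie"
print(classifier.classify(document_features(new_doc)))

# Show the features that contribute most to the decision
classifier.show_most_informative_features(5)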

With the steps above you can run a text classification task with NLTK and obtain a classification accuracy. You can also try other classifiers, such as an SVM or a decision tree, to see whether they give better results; one way to do that is sketched below.
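NLTK's SklearnClassifier wrapper lets you train scikit-learn models on the same feature sets used above. A minimal sketch, assuming scikit-learn is installed (the variable names are illustrative):

from nltk.classify import SklearnClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Train an SVM and a decision tree on the same NLTK feature sets
svm_classifier = SklearnClassifier(LinearSVC()).train(train_set)
tree_classifier = SklearnClassifier(DecisionTreeClassifier()).train(train_set)

print(nltk.classify.accuracy(svm_classifier, test_set))
print(nltk.classify.accuracy(tree_classifier, test_set))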
