摘要 (Abstract): Most current word embedding models are based on the distributional hypothesis: they treat the word as the basic semantic unit and learn word representations from external context. However, in languages such as Chinese, a word is often composed of multiple characters, and these characters carry rich internal information; the meaning of a word is closely tied to the meanings of its characters. Since commonly used word models ignore this character-level information, this paper takes Chinese as an example and proposes a model that jointly learns word and character representations. Furthermore, to handle the cases in Chinese where a single character has multiple senses and a multi-character unit has a single sense, we propose a joint learning model with multi-sense character embeddings together with a selection method for single-sense multi-character units. Finally, the proposed models are evaluated on word similarity and analogy reasoning tasks; the results show that they outperform the other word embedding models compared.
Abstract: Most word embedding models are based on the distributional hypothesis: they take the word as the basic unit and infer word representations from external context. However, in languages such as Chinese, a word is built from several characters, and these characters contain rich internal information; the semantics of a word are closely related to the semantics of its component characters. Therefore, this paper takes Chinese as an example and presents two models that collaboratively learn word and character representations. To address homonymy and polysemy, we further propose multiple-prototype character embeddings and a word selection method. We evaluate the proposed models on word similarity and analogy tasks, and the results demonstrate that they outperform the baseline models.
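To make the idea of collaborative word–character learning concrete, the sketch below composes a word's vector from its own embedding and the mean of its characters' embeddings. The vocabularies, dimensionality, and the 50/50 averaging rule are illustrative assumptions (one common CWE-style composition), not the paper's exact formulation.

```python
import numpy as np

# Hypothetical toy embeddings; in the real model these would be trained jointly.
dim = 4
rng = np.random.default_rng(0)
word_vecs = {"智能": rng.normal(size=dim)}            # word-level embedding
char_vecs = {c: rng.normal(size=dim) for c in "智能"}  # character-level embeddings

def compose(word):
    """Combine a word's own vector with the mean of its character vectors.

    The equal weighting is an assumption for illustration; the paper's models
    learn word and character representations together during training.
    """
    char_mean = np.mean([char_vecs[c] for c in word], axis=0)
    return 0.5 * (word_vecs[word] + char_mean)

v = compose("智能")
print(v.shape)  # (4,)
```

In a multiple-prototype extension, `char_vecs[c]` would instead hold several sense vectors per character, with the context used to select (or softly weight) the appropriate sense before composition.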