In human conversation, we can talk naturally because we know that "Twitter" and "Facebook" are social networking services, and when someone says "Yamaha" we can tell from context whether they mean Yamaha the motorcycle maker or Yamaha the piano maker. This is because we have background knowledge associated with each word. In recent natural language processing, the technique commonly used to connect a word with such knowledge is called entity linking.
If you want to try it out right away with the code, install the following.
Code:
Required data: the data you want to analyze
The following use cases come to mind when actually putting this to work.
1: Suggestion. If words related to a search keyword are also surfaced, searching becomes easier, which benefits the user.
2: Dialogue interfaces. Utterances in a dialogue are short, so they carry little information. To give sophisticated answers from that little information, it is essential to link not only the words but also the knowledge related to them.
3: Information extraction from Twitter. Tweets carry little information, so it is hard to extract anything useful with simple keywords alone. By associating keywords with related knowledge, it becomes possible to obtain useful information that keyword matching could not.
Entity linking is a technique that has attracted a lot of attention at top natural language processing conferences such as ACL.
The way a simple keyword is connected to knowledge varies: the linked knowledge may contain detailed information, or it may be summary information with supplementary details. There are two important points.
1: Extract only the keywords considered important from the text
2: Connect the information related to those keywords
Words that are generally considered important can be extracted by keyword matching with Wikification, but if you just want something simple to run, you can try extracting only the proper nouns with MeCab, as in the sketch below. Strictly speaking, a machine learning model is needed to judge whether a keyword is actually useful, but I will not cover that in this article. If you want to know more, please see the following material.
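For reference, a minimal sketch of the MeCab-based extraction might look like this. It assumes the mecab-python3 bindings and an IPA-style dictionary (whose feature string starts with the part of speech and its sub-category); the function name is just for illustration.
import MeCab

def extract_proper_nouns(text):
    """Return the surface forms that MeCab tags as proper nouns."""
    tagger = MeCab.Tagger()
    node = tagger.parseToNode(text)
    proper_nouns = []
    while node:
        features = node.feature.split(",")
        # With the IPA dictionary, features[0] is the part of speech (名詞 = noun)
        # and features[1] is the sub-category (固有名詞 = proper noun).
        if len(features) > 1 and features[0] == "名詞" and features[1] == "固有名詞":
            proper_nouns.append(node.surface)
        node = node.next
    return proper_nouns

print(extract_proper_nouns("ヤマハはバイクとピアノを作っている。"))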
Entity linking utilizing knowledge base
Another technique is simply to match the keyword against Wikipedia or DBpedia and use the information linked to the match.
If the match is represented in a vector space, various operations become possible through calculation, which broadens the range of uses. The Japanese Wikipedia Entity Vector introduced here makes this possible.
Since it is an extension of Word2Vec, you can apply it if you understand how Word2Vec works; if you want the details, I recommend reading the paper. Even if you do not read it, the pre-computed vectors are already provided, so if you are short on time you can simply use them, as in the sketch below.
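If the downloaded model is in word2vec format, it can be loaded with gensim roughly as follows. The file name and the entity token here are placeholders; entity tokens follow the "<< >>" convention described later in this article.
from gensim.models import KeyedVectors

# Path and format are assumptions: adjust them to the file you actually
# downloaded, and use binary=False if the model is a text-format file.
vectors = KeyedVectors.load_word2vec_format("entity_vector.model.bin", binary=True)

# Hyperlinked entities are stored as "<<word>>"-style tokens (see the caveats later).
print(vectors.most_similar("<<東京>>", topn=5))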
Multiple named entity labels for Wikipedia articles
The method is very simple and easy to understand:
1: Split the Wikipedia text into words with MeCab or a similar tokenizer
2: Replace each hyperlinked word with the title of its link target
3: When a word that was hyperlinked elsewhere appears again without a hyperlink, treat it the same way as in step 2
4: Train Word2Vec on the resulting word sequences
This makes it possible to extract named entities as words and associate them with real-world entities (entity linking).
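As an illustration of step 4, training Word2Vec on the hyperlink-replaced corpus with gensim could look roughly like this. The corpus file name and hyperparameters are placeholders, and older gensim versions use size instead of vector_size.
from gensim.models import Word2Vec

# Each line is one sentence, already tokenized with MeCab and with hyperlinked
# words replaced by their link-target titles (steps 1-3 above).
with open("wikipedia_replaced_corpus.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f]

model = Word2Vec(sentences, vector_size=200, window=5, min_count=5, workers=4)
model.wv.save_word2vec_format("entity_vector.model.bin", binary=True)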
Here is an example system implementation: obtaining related words for a given word using the Japanese Wikipedia Entity Vector. The code is posted on GitHub, so I will quote only the important parts.
1: Collect Twitter tweets (this time I used Rinna's data)
2: Tokenize the tweets (word segmentation)
3: Extract named entities with Wikification
4: Attach vectors using the Japanese Wikipedia Entity Vector
5: Compute the cosine similarity and output the words with high similarity as synonyms
The code is simple so see below.
https://github.com/SnowMasaya/WikiPedia_Entity_Vector_Get_Similarity_word
The cosine similarity is computed here, but its computational cost is high, so I tried to speed up this part.
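The computation being sped up is just the standard cosine similarity; in plain NumPy it is the following, which then has to be repeated against every word in the vocabulary.
import numpy as np

def cosine_similarity(a, b):
    # Dot product of the vectors divided by the product of their norms.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))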
Using OpenBLAS for the speed-up is a little tricky, so I will explain it. On a Mac, install it with the following command.
brew install openblas
Next, specify the directory where the OpenBLAS library is located (this is the format of the site.cfg used when building NumPy/SciPy from source):
[openblas]
libraries = openblas
library_dirs = /usr/local/opt/openblas/lib
include_dirs = /usr/local/opt/openblas/include
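After rebuilding NumPy/SciPy with this configuration, you can check whether OpenBLAS was actually picked up; the exact output differs between versions.
import numpy as np

# Prints the BLAS/LAPACK libraries NumPy was built against.
np.show_config()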
The effect is uncertain because the calculation goes through Cython-typed code, but first, check with the following code whether the memory layout of the vectors used in the cosine similarity calculation is C order.
The reason for this check is that BLAS performs a copy when a vector's memory layout is C order, whereas in Fortran order that copy is unnecessary, so skipping it speeds up the computation (a quick illustration follows the function below).
def __force_forder(self, x):
    """
    Converts array x to Fortran order.
    Returns a tuple in the form (x, is_transposed).
    :param x(vector):
    :return:
    """
    if x.flags.c_contiguous:
        return (x.T, True)
    else:
        return (x, False)
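As a quick illustration of why this works: a freshly created NumPy array is C-contiguous, while its transpose is a Fortran-ordered view of the same memory, so no copy has to be made.
import numpy as np

x = np.ones((3, 4))
print(x.flags.c_contiguous)    # True: the default NumPy layout is C order
print(x.T.flags.f_contiguous)  # True: the transpose is a Fortran-ordered view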
Next, the dot product of the vectors is computed with the following code. After checking the memory order, a vector in C order is explicitly marked as transposed; a vector already in Fortran order needs no conversion, so the calculation runs fast.
def __faster_dot(self, A, B):
    """
    Use BLAS libraries directly to perform the dot product.
    References:
    https://www.huyng.com/posts/faster-numpy-dot-product
    http://stackoverflow.com/questions/9478791/is-there-an-enhanced-numpy-scipy-dot-method
    :param A(mat): vector
    :param B(mat): vector
    :return:
    """
    # FB is the BLAS wrapper module imported elsewhere in the class
    # (presumably scipy.linalg.blas or an equivalent).
    A, trans_a = self.__force_forder(A)
    B, trans_b = self.__force_forder(B)
    return FB.dgemm(alpha=1.0, a=A, b=B, trans_a=trans_a, trans_b=trans_b)
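Inside the same class, the BLAS-backed dot product might then be combined into the cosine similarity roughly as follows. This is a sketch, not the repository's exact code; it assumes NumPy is imported as np and that A and B are 1-D vectors.
def faster_cosine_similarity(self, A, B):
    """Sketch: cosine similarity using the BLAS-backed dot product above."""
    # dgemm on a (1, n) by (n, 1) problem returns a 1 x 1 matrix.
    dot = self.__faster_dot(A.reshape(1, -1), B.reshape(-1, 1))[0][0]
    return dot / (np.linalg.norm(A) * np.linalg.norm(B))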
Calculating the cosine similarity is also a bottleneck: the Japanese Wikipedia Entity Vector has a large number of registered words, so repeating the same computation for every one of them takes a lot of time.
Since Python basically runs in a single process, I tried to speed it up by implementing thread-based parallel processing.
I use the producer-consumer pattern with a Queue. The consumer side does the heavy processing here, so I tried to speed things up by increasing the number of consumer threads: a consumer size is set, and that many threads are created and started, as in the snippet below (a fuller self-contained sketch of the pattern follows it).
for index in range(args.consumer_size):
    multi_thread_consumer_crawl_instance = threading.Thread(
        target=producerConsumer.consumer_run,
        name=consumer_name + str(index))
    multi_thread_consumer_crawl_instance.start()
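For reference, here is a minimal self-contained sketch of the producer-consumer pattern used here; the names and the placeholder work function are illustrative, not the exact ones in the repository.
import queue
import threading
import time

task_queue = queue.Queue()

def compute_similar_words(entity):
    # Placeholder for the expensive cosine-similarity search over the vocabulary.
    time.sleep(0.1)
    print(entity, "done")

def producer(named_entities):
    # Producer: put each named entity to be processed on the queue.
    for entity in named_entities:
        task_queue.put(entity)

def consumer():
    # Consumer: pull entities off the queue until it stays empty.
    while True:
        try:
            entity = task_queue.get(timeout=1)
        except queue.Empty:
            break
        compute_similar_words(entity)
        task_queue.task_done()

named_entities = ["秋田", "ゴジラ", "ヤマハ"]
consumer_size = 4

threading.Thread(target=producer, args=(named_entities,)).start()
consumers = [threading.Thread(target=consumer, name="consumer" + str(i))
             for i in range(consumer_size)]
for t in consumers:
    t.start()
for t in consumers:
    t.join()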
The results are shown in the format original named entity: [computed synonyms]. Looking at them, you can see that highly relevant words that would be hard to capture with a simple keyword match are being retrieved.
'Akita': ['Nagano', 'Fukushima', 'Kochi', 'Iwate', 'Yamagata', 'Niigata', 'Aomori', 'Kumamoto', 'Morioka'],
'hundred': ['hundred', 'Ten', 'thousand'],
'Godzilla': ['Godzilla_(1954 movie)', 'Godzilla_(Fictitious monster)', 'Gamera'],
'3': ['4', '6', '5', '0', '7', '8', '9', '2', '1'],
'Red': ['purple', 'green', 'Green', 'vermilion', 'black', 'red', 'Blue', 'White', 'yellow', 'Indigo', 'Blue'],
'Pig': ['Cow', 'sheep', 'Sheep', 'Chicken', 'Goat', 'chicken', 'goat', 'pig', 'cow'],
'golf': ['Bowling'],
'bamboo': ['willow', 'Pine']
'5': ['4', '6', '0', '7', '3', '8', '9', '2', '1'],
'branch': ['Stem', 'leaf', 'branchは'],
'wood': ['Cedar', 'Oak', 'stump', 'pine tree'],
'Hmm': ['Pen', 'Gyu'],
'student': ['student', 'university student'],
'Mochi': ['Manju', 'sake bottle', 'Red rice', 'egg', 'Miki', 'Porridge', 'Azuki', 'dumpling'],
'Waist': ['buttocks', 'knee', 'heel', 'shoulder'],
'beard': ['mustache', 'Beard', 'Mustache', 'beard', 'beard', 'hair', 'chin beard'],
'Cat': ['Little bird', 'cat', 'kitten', 'mouse or rat'],
'China': ['Taiwan', 'Korea', 'Korea', 'People's Republic of China'],
'two': ['Five', 'Two', 'Two', 'three'],
'yukata': ['yukata', 'Everyday wear', 'Pure white', 'Mourning clothes', 'kimono', 'tuxedo', 'Everyday wear', 'Pure white', 'Mourning clothes', 'kimono', 'tuxedo'],
'baseball': ['rugby'],
'hair': ['head hair', 'black hair', 'long hair', 'beard', 'hair', 'bangs', 'blond hair', 'hairstyle'],
'autumn': ['autumn', 'summer', 'spring', 'summer', 'spring'],
'Nara': ['Wakayama']
A few caveats. Named entities are obtained through Wikification, so the results depend on Wikipedia; the knowledge space relies on Wikipedia, so it is better not to use this approach in specialized domains or when many rare cases are involved. In the Japanese Wikipedia Entity Vector, hyperlinked words are represented as "<< word >>", so extra processing is needed to handle the "<< >>" markers (a small helper is sketched below). It also consumes a lot of memory, and the computation time is very long: with 192 original named entities it takes about three hours in a single thread in a single process, but since the same processing is repeated for each named entity, parallel and distributed processing makes it faster.
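A small helper for that "<< >>" convention might look like this (the function names are illustrative): wrap a surface word to look it up as an entity, and strip the markers from the neighbours that come back.
import re

def to_entity_token(word):
    # Wrap a surface form so it matches the "<<word>>" entity entries.
    return "<<" + word + ">>"

def strip_entity_markers(token):
    # Remove a leading "<<" and a trailing ">>" if present.
    return re.sub(r"^<<|>>$", "", token)

print(to_entity_token("東京"))            # <<東京>>
print(strip_entity_markers("<<東京>>"))   # 東京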
Entity linking utilizing knowledge base
Yamada, Ikuya, Hideaki Takeda, and Yoshiyasu Takefuji. "Enhancing Named Entity Recognition in Twitter Messages Using Entity Linking." ACL-IJCNLP 2015 (2015): 136.
Faster numpy dot product for multi-dimensional arrays
Is there an “enhanced” numpy/scipy dot method?
models.word2vec – Deep learning with word2vec