https://deepmind.com/research/publications/Visual-Grounding-in-Video-for-Unsupervised-Word-Translation
Multilingual concept mapping using visual grounding as a hint
The method learns word pairs between different languages without a parallel corpus, so it can handle languages for which only small datasets exist. Text-based translation techniques can still be useful for initialization.
Narrations of the same process, such as one cooking procedure, recorded in multiple languages are used as the training data. This imitates how a multilingual child learns several languages from shared visual cues.
Text-based baselines match languages such as French and English using corpora like Wikipedia, scoring candidate word pairs by the dissimilarity of their embeddings.
The proposed method addresses many of the drawbacks of these existing text-based word mapping techniques. It is unsupervised in that it needs no parallel corpus between the two languages: instead, words in each language are grounded in a shared visual domain Z, and translation pairs are found through that space.
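As a minimal sketch of this idea (not the paper's actual model): suppose each word's embedding is the average of the visual features of the video clips it is narrated over, and translation is nearest-neighbor search across languages by cosine similarity in that shared visual space. All function names, the toy clip features, and the word-to-clip co-occurrence data below are illustrative assumptions.

```python
import numpy as np

def word_embeddings(vocab, clip_features, occurrences):
    """Embed each word as the mean visual feature of the clips
    in which it is narrated (toy stand-in for visual grounding)."""
    return {w: clip_features[occurrences[w]].mean(axis=0) for w in vocab}

def translate(word, src_emb, tgt_emb):
    """Return the target-language word whose visual embedding is
    most cosine-similar to the source word's embedding."""
    v = src_emb[word]
    v = v / np.linalg.norm(v)
    best, best_sim = None, -np.inf
    for w, u in tgt_emb.items():
        sim = float(v @ (u / np.linalg.norm(u)))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Toy visual features for 4 video clips (e.g. steps of one recipe).
rng = np.random.default_rng(0)
clips = rng.normal(size=(4, 8))

# Hypothetical co-occurrence: which clips each word is narrated over.
en = word_embeddings(["knife", "tomato"], clips,
                     {"knife": [0, 1], "tomato": [2, 3]})
fr = word_embeddings(["couteau", "tomate"], clips,
                     {"couteau": [0, 1], "tomate": [2, 3]})

print(translate("knife", en, fr))   # -> couteau
print(translate("tomate", fr, en))  # -> tomato
```

Because "knife" and "couteau" are grounded in the same clips here, their visual embeddings coincide and nearest-neighbor matching recovers the pair; the real method learns these embeddings jointly from raw narrated video rather than from hand-assigned co-occurrences.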
The model can be trained on unlabeled YouTube videos, and the approach is especially effective when the two languages' text corpora differ widely.