In this article I discuss a research paper by Adrian Sanborn and Jacek Skryzalin titled “Deep Learning for Semantic Similarity”.
Aim: given two sentences or short text fragments, are they similar? And if so, how similar or dissimilar are they?
The authors propose deep learning techniques, in particular recurrent neural networks and recursive neural networks. A recurrent neural network incorporates previous states into its learning mechanism: the model is a non-linear function of the previous hidden state plus the new input. The semantic similarity model works by learning representations for two sets of words, one for each sentence; this is the learning or model-building part. Deep neural networks require a considerably sized training set, and each word here is represented by its word embedding. In the recursive-neural-network variant, the input to the model is a binary tree, namely the parse tree of the sentence. The authors interpret their results in light of the constraints of the experiments performed, and the similarity scores are classified into six categories.
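To make the embedding-based idea concrete, here is a toy sketch (not the authors' model): each word is mapped to a vector, a sentence is represented by the average of its word vectors, and two sentences are compared by cosine similarity. The tiny three-dimensional embeddings below are invented for illustration; real systems use pretrained vectors such as word2vec or GloVe.

```python
import math

# Made-up 3-dimensional word embeddings, purely for illustration.
EMBEDDINGS = {
    "dog":   [0.9, 0.1, 0.0],
    "puppy": [0.8, 0.2, 0.1],
    "car":   [0.0, 0.9, 0.8],
    "runs":  [0.3, 0.4, 0.1],
    "fast":  [0.2, 0.5, 0.2],
}

def sentence_vector(sentence):
    """Represent a sentence as the average of its known word embeddings."""
    vecs = [EMBEDDINGS[w] for w in sentence.lower().split() if w in EMBEDDINGS]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(3)]

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sim_close = cosine(sentence_vector("dog runs fast"),
                   sentence_vector("puppy runs fast"))
sim_far = cosine(sentence_vector("dog runs fast"),
                 sentence_vector("car"))
print(sim_close, sim_far)
```

As expected, the two near-paraphrases score much higher than the unrelated pair. The paper's models go further: a recurrent network would read the words in order, and a recursive network would compose the vectors along the parse tree instead of simply averaging them.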
Semantic similarity can be used in various applications, as the authors also suggest. Once such a technique is well developed, it becomes handy for computing the similarity between two comments on Twitter, LinkedIn, Facebook, or any other social media platform. It could power a “statistics for comments” feature, helpful both to social media businesses and to individuals, especially those who receive many comments and want statistics about them, not just counts of likes and dislikes.
Sanborn, A., & Skryzalin, J. (2015). Deep learning for semantic similarity. CS224d: Deep Learning for Natural Language Processing. Stanford, CA, USA: Stanford University.