The following paper is discussed in this article.
Causal Reasoning and Large Language Models: Opening a New Frontier for Causality. Kıcıman et al. (2023)
Causality — the automatic detection of relationships between cause and effect. For illustration, consider two concepts discussed in some text, say a medical text: let A be a symptom and B a disease. Given that text fragment, does A imply B, or does B imply A? Discovering this for a pair of objects is called causal reasoning. In some texts both directions may hold, though such cases are rare. This naturally yields a graph, where nodes are the concepts and edges are the extracted relationships. The graph is built by forming a pair for each candidate edge and using an LLM to determine the direction of the edge. This is the predicted direction of causality.
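The pairwise procedure described above can be sketched in a few lines. Note that `ask_llm`, the prompt wording, and the example concept pairs are all illustrative assumptions, not the paper's actual code; a real implementation would replace the stub with a call to an LLM API.

```python
# Sketch of pairwise causal-direction prediction with an LLM.
# `ask_llm` is a hypothetical stand-in for a real LLM call; it is
# stubbed with fixed answers here so the example is self-contained.

def ask_llm(prompt: str) -> str:
    # Stub: a real implementation would send the prompt to an LLM API.
    stub_answers = {
        ("fever", "infection"): "B",   # pretend the LLM says: infection causes fever
        ("rash", "allergy"): "B",      # pretend the LLM says: allergy causes rash
    }
    for (a, b), answer in stub_answers.items():
        if a in prompt and b in prompt:
            return answer
    return "A"

def causal_direction(a: str, b: str) -> tuple:
    """Ask the LLM whether A causes B or B causes A; return a directed edge."""
    prompt = (f"Which is more likely: (A) {a} causes {b}, "
              f"or (B) {b} causes {a}? Answer with A or B.")
    answer = ask_llm(prompt).strip().upper()
    return (a, b) if answer.startswith("A") else (b, a)

# One directed edge per concept pair, oriented by the (stubbed) LLM.
pairs = [("fever", "infection"), ("rash", "allergy")]
edges = [causal_direction(a, b) for a, b in pairs]
print(edges)  # [('infection', 'fever'), ('allergy', 'rash')]
```

Each edge in the resulting graph is thus decided by a single direction query per concept pair.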
It has been concluded in the paper that:
– LLMs outperform state-of-the-art graph-based and counterfactual-inference algorithms in causal relation discovery.
– LLMs can assist humans in causal relation discovery.
– Evaluations were performed by the authors on several datasets.
– Causal relationship prediction has improved alongside improvements in LLM capabilities.
– Before LLMs, two kinds of causality-detection mechanisms were used: (1) logic-based causality, where logic-based computations are performed, and (2) graph-based causality, where graph-based concepts are used. In both, the input is divided into questions that are iterated until the right relationships are extracted.
– LLMs help by bringing metadata into the analysis and discovery of causal relations. According to the authors, this kind of reasoning mirrors what human experts do when they build graphs for causality computation.
– As per the authors, LLMs fill in domain knowledge that was earlier supplied by humans.
– This is how LLMs can add edges to the graph to find relationships.
– LLM-based knowledge discovery outperforms other algorithms.
– On the Tübingen cause-effect pairs dataset, GPT-4 outperforms all other methods with 96% accuracy.
– On the neuropathic pain dataset too, GPT-4 performed better than traditional techniques, with 96.2% accuracy. GPT-3.5-turbo performed much worse, attaining 85.5% accuracy.
– LLMs can infer edges for full graph discovery as well.
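Extending the pairwise idea to full graph discovery amounts to orienting every pair of concepts and collecting the edges. The sketch below assumes a hypothetical `orient_edge` helper (stubbed with a fixed domain ordering for illustration, where a real version would query an LLM as above); the concept names are also illustrative.

```python
# Sketch of full causal-graph discovery by orienting all concept pairs.
from itertools import combinations

def orient_edge(a: str, b: str) -> tuple:
    # Hypothetical: a real version would ask an LLM which direction is
    # more plausible. Stubbed here with a fixed ordering for illustration.
    order = {"smoking": 0, "tar deposits": 1, "lung disease": 2}
    return (a, b) if order[a] < order[b] else (b, a)

def discover_graph(concepts: list) -> dict:
    """Build a directed causal graph by orienting every concept pair."""
    graph = {c: [] for c in concepts}
    for a, b in combinations(concepts, 2):
        src, dst = orient_edge(a, b)
        graph[src].append(dst)
    return graph

g = discover_graph(["smoking", "tar deposits", "lung disease"])
print(g)
# {'smoking': ['tar deposits', 'lung disease'],
#  'tar deposits': ['lung disease'], 'lung disease': []}
```

Note that this brute-force approach makes one query per pair, so the number of LLM calls grows quadratically with the number of concepts.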
– The Arctic sea ice dataset was also evaluated.
– Causal discovery results can also be affected by memorization of information seen during training.
– Conclusion: LLMs are highly useful for causality detection and may be applied in many domains in the future. Hence this is a highly beneficial paper.
Causality has many real-life applications, some of which are as follows:
1. Medical Applications.
a. Here, diagnosis can be made efficient with automatic causality detection.
b. Conclusions about medicines for different kinds of patients can be drawn.
c. Can help in drug discovery.
d. Can help in allergy identification.
2. Legal Applications.
a. Solving cases automatically or partially with human help.
b. Identification of causes, effects, or both.
c. Automation of legal systems.
3. Knowledge Discovery
a. In resolving historical points of confusion.
b. In solving archaeological problems.
c. In explaining open questions, from the origin of the universe onward, to mention a few.
Kıcıman, E., Ness, R., Sharma, A., & Tan, C. (2023). Causal Reasoning and Large Language Models: Opening a New Frontier for Causality. arXiv preprint arXiv:2305.00050.