Meet ClarifyDelphi: An Interactive System That Elicits Context From Social And Moral Situations Using Reinforcement Learning

Context is essential to effective communication: it shapes how words are heard and interpreted by the listener. It tells both speaker and listener how much weight to give a statement, what inferences to draw from it, and, most importantly, what the message actually means. Context is equally crucial for moral reasoning, since judging an action with common sense depends on the social circumstances surrounding it.

Delphi, an earlier model, predicts moral judgments for a wide variety of everyday situations, but it lacks knowledge of the surrounding context. To overcome this limitation, a team of researchers has proposed ClarifyDelphi, an interactive system that learns to ask clarification questions, such as ‘Why did you lie to your friend?’, in order to elicit the salient context of a situation and improve moral judgments.
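Conceptually, the interaction is a judge–ask–re-judge loop. The sketch below illustrates that flow; `judge` and `ask_clarifying_question` are hypothetical stand-ins for the Delphi judgment model and the learned question generator, and the toy keyword-based logic is invented purely for illustration:

```python
def judge(situation: str) -> str:
    """Toy stand-in for a Delphi-style moral-judgment model."""
    return "it's okay" if "protect" in situation else "it's wrong"

def ask_clarifying_question(situation: str) -> str:
    """Toy stand-in for the learned question generator."""
    return "Why did you do that?"

def clarify_and_judge(situation: str, answer: str) -> dict:
    """Judge a situation, ask for context, then re-judge with the answer."""
    question = ask_clarifying_question(situation)
    updated_situation = f"{situation} {answer}"
    return {
        "initial_judgment": judge(situation),
        "question": question,
        "updated_judgment": judge(updated_situation),
    }

result = clarify_and_judge("lying to my friend", "I lied to protect them.")
# The added context flips the judgment: a defeasible (weakening) update.
```

The point of the loop is exactly this kind of defeasible update: the answer to the clarification question can overturn the initial judgment.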

According to the authors, the most informative questions are those whose potential answers would lead to diverging moral judgments: if different answers to a question produce different moral assessments, then that question targets context that genuinely matters for the judgment. To capture this, the team has built a reinforcement learning framework with a defeasibility reward, which maximizes the divergence between the moral judgments associated with hypothetical answers to a question. Question generation is optimized with Proximal Policy Optimization (PPO).
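The reward can be sketched as a divergence between the judgment distributions a Delphi-style classifier assigns to the situation extended with each hypothetical answer. The snippet below uses Jensen–Shannon divergence over a three-way (bad / ok / good) distribution; the class labels and the probability values are illustrative assumptions, not taken from the released code, and the paper's exact reward formulation may differ in detail:

```python
import math

def kl(p, q):
    """Kullback–Leibler divergence KL(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Symmetric, bounded Jensen–Shannon divergence between p and q."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical judgment distributions over (bad, ok, good) for the
# situation extended with a weakening vs. a strengthening answer.
p_weakener = [0.1, 0.3, 0.6]
p_strengthener = [0.7, 0.2, 0.1]

# Larger divergence -> the question's answers pull the judgment apart,
# so the question is more informative and earns a higher reward.
reward = js_divergence(p_weakener, p_strengthener)
```

A question whose hypothetical answers leave the judgment unchanged gets a divergence near zero, so the policy is pushed toward questions whose answers can actually flip the verdict.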

In evaluation, ClarifyDelphi outperforms baseline methods for clarification question generation, producing more relevant, informative, and defeasible questions and demonstrating the effectiveness of the approach for eliciting crucial contextual information. The authors also quantify how much supervised clarification-question training data is needed for a good initial policy and show that the generated questions lead to defeasible updates.

The team's contributions can be summarized as follows:

- It introduces the task of clarification question generation for social and moral situations, proposing a reinforcement learning-based technique that defines defeasibility as a new form of relevance for clarification questions.

- It publicly releases δ-CLARIFY, a dataset of 33k crowdsourced clarification questions.

- It also releases δ-CLARIFYsilver, which contains generated questions conditioned on a defeasible inference dataset.

- The trained models and code are publicly available.

The flexibility of human moral reasoning lies in knowing when a moral rule should apply and in recognizing legitimate exceptions given the context. ClarifyDelphi generates questions that surface missing context and thereby enables more accurate moral judgments; compared with other approaches, it produces more questions whose answers either weaken or strengthen the original judgment.

Overall, ClarifyDelphi is a promising model for generating informative and relevant questions capable of revealing diverging moral judgments.

Check out the Paper and GitHub link.

The post Meet ClarifyDelphi: An Interactive System That Elicits Context From Social And Moral Situations Using Reinforcement Learning appeared first on MarkTechPost.