Researchers from the University of Manchester Introduce MentalLLaMA: The First Open-Source LLM Series for Readable Mental Health Analysis with Capacity of Instruction Following

PTSD and other mental health conditions have a substantial impact on public health globally. Due to stigma, many individuals do not promptly seek psychiatric help, which can have severe repercussions. Social media has ingrained itself into people’s daily lives with the advancement of online technology. Social media texts are a valuable source for mental health analysis and possibly early intervention, since many people with probable mental health disorders use sites like Twitter and Reddit to convey negative emotions and express stress. However, the exponentially rising volume of social media messages makes manual analysis of posts unfeasible. As a result, numerous studies use natural language processing (NLP) approaches to analyze social media for mental health automatically.

Prior natural language processing (NLP) approaches to mental health generally modeled social media analysis as text classification problems, where discriminative domain-specific pre-trained language models (PLMs) attained state-of-the-art performance. A major drawback is that these models make predictions in a black-box manner with little interpretability, which considerably reduces their reliability in actual usage. Recent studies assessed the effectiveness of the newest large language models (LLMs), including ChatGPT and LLaMA, in identifying a range of mental health conditions and providing in-depth justifications for their decisions using Chain-of-Thought (CoT) prompting. They also conducted extensive human evaluations demonstrating that ChatGPT can produce explanations for its correct classifications comparable to those written by humans, showing its potential to improve the interpretability of mental health analysis.

However, ChatGPT currently fails to match the performance of state-of-the-art supervised models in zero-shot or few-shot settings, which restricts its application in real-world situations. A practical remedy is to align foundation LLMs with the target domain by fine-tuning them on a limited amount of data. The development of LLMs for interpretable mental health analysis faces two major obstacles. First, high-quality training data are necessary for fine-tuning LLMs. Although several datasets for the study of mental health on social media contain brief extracts of informal content, open-source data that offers thorough and trustworthy justifications for detection results is still lacking, mainly because of the sensitivity of the research topic and the high cost of explanations written by subject-matter specialists.
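To make the alignment step concrete, the sketch below fine-tunes a LLaMA-2 base checkpoint on a small set of domain instruction pairs with Hugging Face Transformers. This is a minimal illustration, not the authors' training code; the checkpoint name, data file, and hyperparameters are assumptions.

```python
# Minimal sketch (not the authors' code): instruction fine-tuning a LLaMA-2 base
# model on a small domain dataset with Hugging Face Transformers.
# The checkpoint name, data file, and hyperparameters are assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"          # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Assumed JSONL file with {"query": ..., "answer": ...} instruction pairs.
ds = load_dataset("json", data_files="mental_health_instructions.jsonl")["train"]

def to_features(example):
    # Concatenate the instruction query and the reference answer into one sequence.
    text = example["query"] + "\n" + example["answer"] + tok.eos_token
    return tok(text, truncation=True, max_length=1024)

ds = ds.map(to_features, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-mental-ft",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           learning_rate=2e-5),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM labels
)
trainer.train()
```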

Second, few open-source LLMs for interpretable mental health analysis have been made available to the general public, and prompting or fine-tuning closed-source LLMs like ChatGPT can be quite expensive. The high cost and scarcity of open resources hamper the growth of the relevant research community. To close these gaps, the researchers created the first multi-task and multi-source Interpretable Mental Health Instruction (IMHI) dataset with 105K data samples to enable instruction tuning and evaluation of LLMs. First, they gather training data from 10 existing sources covering 8 tasks, including binary mental health detection, multi-class mental health detection, mental health cause/factor detection, and mental risk and wellness factors detection.

Figure 1 shows a few examples of MentalLLaMA’s performance on various mental health analysis tasks. It also summarizes the training data and underlying models of MentalLLaMA.

The gathered data include social media posts and the labels that go with them. Second, they annotate a thorough explanation for every label. Drawing inspiration from the success of self-instruct, they employ expert-written few-shot prompts and the collected labels to query ChatGPT and elicit explanations from its responses, as sketched below. To further guarantee the quality of the explanations, they perform automatic evaluations of all acquired data, assessing the accuracy of the predictions, the correspondence between the labels and the explanations, and the explanations’ overall quality. With a carefully crafted annotation scheme from subject-matter experts, they also conduct human evaluations on a subset of the collected data.
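A rough sketch of this explanation-collection step is shown below, using the OpenAI chat API with a hand-written few-shot prompt. The prompt wording, model name, and example fields are placeholders assumed for illustration, not the authors' actual prompts.

```python
# Sketch (assumed, not the authors' prompts): eliciting a label-conditioned
# explanation for a social media post from ChatGPT via few-shot prompting.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = (
    "Post: I can't sleep and nothing feels worth doing anymore.\n"
    "Label: depression\n"
    "Explanation: The post expresses persistent loss of interest and sleep "
    "problems, which are common indicators of depression.\n\n"
)

def explain(post: str, label: str) -> str:
    """Ask the model to justify the given label for the given post."""
    prompt = FEW_SHOT + f"Post: {post}\nLabel: {label}\nExplanation:"
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return resp.choices[0].message.content.strip()
```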

Third, they use a rule-based approach to convert all gathered social media posts, labels, and explanations into instruction-based query-answer pairs, which are then used to build the IMHI dataset’s training data and evaluation benchmark. Based on the IMHI dataset, researchers from the University of Manchester introduce MentalLLaMA, the first open-source LLM series for interpretable mental health analysis with instruction-following capability. The LLaMA2 foundation models serve as the basis for training MentalLLaMA; specifically, they fine-tune the MentalLLaMA-7B, MentalLLaMA-chat-7B, and MentalLLaMA-chat-13B models. Figure 1 displays a few examples of MentalLLaMA’s capabilities.
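The rule-based conversion can be pictured roughly as the template below; the exact instruction wording and field names are assumptions for illustration rather than the paper's actual templates.

```python
# Sketch of a rule-based template (wording assumed) that turns a post, its label,
# and its explanation into an instruction-style query-answer pair.
def to_query_answer(post: str, label: str, explanation: str, condition: str) -> dict:
    query = (f'Consider this post: "{post}" '
             f"Question: does the poster suffer from {condition}?")
    answer = f"Reasoning: {explanation} Answer: {label}"
    return {"query": query, "answer": answer}

pair = to_query_answer(
    post="I feel hopeless and can't get out of bed.",
    label="yes",
    explanation="The post describes hopelessness and loss of energy.",
    condition="depression",
)
print(pair["query"])
print(pair["answer"])
```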

Additionally, they thoroughly assess how well MentalLLaMA models perform on the IMHI evaluation benchmark. They measure MentalLLaMA’s predictive accuracy by comparing its classification results with state-of-the-art discriminative methods and other generative language models. According to the findings, MentalLLaMA-chat-13B matches or exceeds state-of-the-art accuracy on seven out of ten test sets. They also assess the quality of the generated explanations. The results demonstrate that instruction tuning, reinforcement learning from human feedback (RLHF), and increasing model size improve the quality of the generated explanations.
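For the classification side of the benchmark, the comparison boils down to standard metrics such as accuracy and weighted F1 over model predictions, sketched below with scikit-learn; the label lists are toy examples, not results from the paper.

```python
# Sketch: scoring predicted labels against gold labels on a test set
# (toy data; the benchmark's actual metrics and splits are defined by the paper).
from sklearn.metrics import accuracy_score, f1_score

gold = ["depression", "none", "depression", "stress"]
pred = ["depression", "none", "stress", "stress"]

print("accuracy:", accuracy_score(gold, pred))
print("weighted F1:", f1_score(gold, pred, average="weighted"))
```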

The contributions of this work can be summarized as follows:

• They create the first multi-task, multi-source instruction-tuning dataset for interpretable mental health analysis on social media, the Interpretable Mental Health Instruction (IMHI) dataset, with 105K samples.

• They propose MentalLLaMA, the first open-source, instruction-following large language model for interpretable mental health analysis. MentalLLaMA can perform mental health analysis on social media data and produce convincing explanations for its conclusions.

• With 19K test samples covering 8 tasks and 10 test sets, they present the first comprehensive evaluation benchmark for interpretable mental health analysis and compare MentalLLaMA with existing methods on it.

Results and analysis show that MentalLLaMA performs strongly, and future work will focus on further improving LLMs for interpretable mental health analysis.

Check out the Paper and Github. All credit for this research goes to the researchers on this project.
