
How to Use Zero-Shot Classification for Sentiment Analysis

By Aminata Kaba, January 2024


Exploring mental well-being insights with zero-shot classification

Artwork by Vivian Peng — reposted with permission

Sentiment analysis is a powerful tool in natural language processing (NLP) for exploring public opinions and emotions in text. In the context of mental health, it can provide compelling insights into the holistic wellness of individuals. As a summer data science associate at The Rockefeller Foundation, I conducted a research project using NLP techniques to explore Reddit discussions on depression before and after the COVID-19 pandemic. In order to better understand gender-related taboos around mental health and depression, I chose to analyze the distinctions between posts made by men and women.

Different Types of Sentiment Analysis

Traditionally, sentiment analysis classifies the overall emotions expressed in a piece of text into three categories: positive, negative, or neutral. But what if you were interested in exploring emotions at a more granular level, such as anticipation, fear, sadness, or anger?

There are ways to do this using sentiment models that reference word libraries, like The NRC Emotion Lexicon, which associates texts with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust). However, the setup for this kind of analysis can be complicated, and the tradeoff may not be worth it.

I found that zero-shot classification can easily be used to produce similar results. The term “zero-shot” comes from the concept that a model can classify data with zero prior exposure to the labels it is asked to classify. This eliminates the need for a training dataset, which is often time-consuming and resource-intensive to create. The model uses its general understanding of the relationships between words, phrases, and concepts to assign them into various categories.

I was able to repurpose the use of zero-shot classification models for sentiment analysis by supplying emotions as labels to classify anticipation, anger, disgust, fear, joy, and trust.

In this post, I’ll share how to quickly get started with sentiment analysis using zero-shot classification in 5 easy steps.

Platforms like Hugging Face simplify the implementation of these models. You can explore different models and test out the results to find the one you want to use:

  1. Go to https://huggingface.co
  2. Click on the “Models” tab and select the type of NLP task you’re interested in
  3. Choose one of the model cards, and this will lead you to the model interface
  4. Pass in a string of text to see how the model performs

Here are a couple of examples of how a sentiment analysis model performed compared to a zero-shot model.

Sentiment Analysis

These models classify text into negative, neutral, and positive categories.

You can see here that the nuance is quite limited and does not leave much room for interpretation. The model shown above can be found here if you want to test or run it yourself.

These types of models are best used when you are looking to get a general pulse on the sentiment—whether the text is leaning positively or negatively.
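For illustration, here is a minimal sketch of a classic three-way sentiment model using the Transformers pipeline; the default checkpoint it downloads and the example sentence are assumptions for demonstration only:

from transformers import pipeline

# Classic three-way sentiment analysis; the default pipeline checkpoint is used here for illustration
sentiment_classifier = pipeline("sentiment-analysis")
print(sentiment_classifier("I finally got out of bed and went for a walk today"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]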

Zero-shot classification

These models classify text into any categories you want by supplying them as labels. Since I was looking at text around mental health, I used emotions as labels, including urgent, joy, sadness, fatigue, and anxiety.

You can see that with the zero-shot classification model, we can easily categorize the text into a more comprehensive representation of human emotions without needing any labeled data. The model can discern nuances and changes in emotion within the text, providing a confidence score for each label. This is useful in mental health applications, where emotions often exist on a spectrum.
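As a rough sketch of the comparison above, here is how you might run those labels through a zero-shot pipeline; the example sentence is made up, and the model is the same DeBERTa checkpoint used later in this post:

from transformers import pipeline

# Zero-shot classification with the emotion labels mentioned above
zero_shot = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli")
candidate_labels = ["urgent", "joy", "sadness", "fatigue", "anxiety"]
print(zero_shot("I can't keep up with everything anymore and I'm exhausted", candidate_labels))
# Returns the labels ranked by confidence for this sentence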

Now that I have identified that the zero-shot classification model is a better fit for my needs, I will walk through how to apply the model to a dataset.

Implementation of the Zero-Shot Model

Here are the requirements to run this example:

torch>=1.9.0
transformers>=4.11.3
datasets>=1.14.0
tokenizers>=0.11.0
pandas
numpy

Step 1: Import the libraries used

In this example, I am using the DeBERTa-v3-base-mnli-fever-anli zero-shot classifier from Hugging Face.


# load hugging face library and model

from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli")

# load in pandas and numpy for data manipulation
import pandas as pd
import numpy as np

pipeline is the function used to load pre-trained models from Hugging Face. Here I am passing in two arguments, and you can get the values for both from the model card (a short example with a different model follows this list):

  • `task`: The type of task the model is performing, passed as a string
  • `model`: Name of the model you are using, passed as a string
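For example, swapping in a different model only requires changing the model string; the alternative checkpoint below is purely illustrative and reuses the pipeline function imported above:

# Illustrative only: any other zero-shot model card from the Hub can be swapped in the same way
alt_classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")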

Step 2: Read in your data

Your data can be in any form, as long as there is a text column where each row contains a string of text. To follow along with this example, you can read in the Reddit depression dataset here. This dataset is made available under the Public Domain Dedication and License v1.0.

#reading in data 
df = pd.read_csv("https://raw.githubusercontent.com/akaba09/redditmentalhealth/main/code/dep.csv")

Here is a preview of the dataset we’ll be using:
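The preview in the original post is a screenshot; if you are following along, you can reproduce it locally with a quick call to head():

# Preview the first few rows of the dataset
print(df.head())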

Step 3: Create a list of classes that you want to use for predicting sentiment

This list supplies the labels the model will use to classify each piece of text. For example, is the text expressing emotions such as anger or disgust? In this case, I am passing a list of emotions as labels. You can use as many or as few labels as you’d like.

# Creating a list of emotions to use as labels
text_labels = ["anticipation", "anger", "disgust", "fear", "joy", "trust"]

Step 4: Run the model prediction on one piece of text first

Run the model on one piece of text first to understand what the model returns and how you want to shape it for your dataset.

# Sample piece of text
sample_text = "still have depression symptoms not as bad as they used to be in fact my therapist says im improving a lot but for the past years ive been stuck in this state of emotional numbness feeling disconnected from myself others and the world and time doesnt seem to be passing"

# Run the model on the sample text
classifier(sample_text, text_labels, multi_label = False)

The classifier object is the Hugging Face Transformers pipeline we loaded in Step 1. In this example, it uses “DeBERTa-v3-base-mnli-fever-anli” and takes three arguments:

  • First position: a piece of text in string format. This variable can have any name. In this example, I named it `sample_text`
  • Second position: the list of labels you want to predict. This variable can have any name. In this example, I named it `text_labels`
  • `multi_label`: takes True or False and determines whether each piece of text can have multiple labels or only one label per text. In this example, I am only interested in one label per text.

Here’s the output you get from the sample text:

#output 
# {'sequence': ' still have depression symptoms not as bad as they used to be in fact my therapist says im improving a lot but for the past years ive been stuck in this state of emotional numbness feeling disconnected from myself others and the world and time doesnt seem to be passing',
# 'labels': ['anticipation', 'trust', 'joy', 'disgust', 'fear', 'anger'],
# 'scores': [0.6039842963218689,
#0.1163715273141861,
#0.074860118329525,
#0.07247171550989151,
#0.0699692890048027,
#0.0623430535197258]}

The model returns a dictionary with the following keys and values:

  • “sequence”: The piece of text we passed in
  • “labels”: The list of labels for the model predictions in descending order of confidence.
  • “scores”: A list of the model’s confidence scores in descending order. The scores are aligned with the labels, so the first element in the scores list corresponds to the first element in the labels list. In this example, the model predicted “anticipation” with a confidence of about 0.604 (see the short snippet after this list).
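As a short snippet of what that looks like in code (reusing the sample text and labels from above), you can index the first element of each list to get the top prediction:

# Store the model output, then pull out the highest-confidence label and score
output = classifier(sample_text, text_labels, multi_label=False)
top_label = output["labels"][0]   # "anticipation" in this example
top_score = output["scores"][0]   # about 0.604 in this example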

Step 5: Write a custom function to make predictions on the entire dataset and include the labels as part of the dataframe

Having seen the structure of the dictionary the model returns, I can write a custom function to apply the predictions to all my data. In this example, I am only interested in keeping one sentiment for each piece of text. The function takes your dataframe and returns a new dataframe with two new columns: one for the sentiment label and one for the model score.

def predict_sentiment(df, text_column, text_labels):
    """
    Predict the sentiment for each piece of text in a dataframe.

    Args:
        df (pandas.DataFrame): A DataFrame containing the text data to perform sentiment analysis on.
        text_column (str): The name of the column in the DataFrame that contains the text data.
        text_labels (list): A list of text labels for sentiment classification.

    Returns:
        pandas.DataFrame: A DataFrame containing the original data with additional columns for the
        predicted sentiment label and corresponding score.

    Raises:
        ValueError: If the DataFrame (df) does not contain the specified text_column.

    Example:
        # Assuming df is a pandas DataFrame and text_labels is a list of text labels
        result = predict_sentiment(df, "text_column_name", text_labels)
    """
    if text_column not in df.columns:
        raise ValueError(f"DataFrame does not contain the column '{text_column}'")

    result_list = []
    for index, row in df.iterrows():
        # Classify each piece of text against the supplied labels
        sequence_to_classify = row[text_column]
        result = classifier(sequence_to_classify, text_labels, multi_label=False)
        # Keep only the highest-confidence label and its score
        result["sentiment"] = result["labels"][0]
        result["score"] = result["scores"][0]
        result_list.append(result)
    result_df = pd.DataFrame(result_list)[["sequence", "sentiment", "score"]]
    # Join the predictions back onto the original dataframe by matching the text
    result_df = pd.merge(df, result_df, left_on=text_column, right_on="sequence", how="left")
    return result_df

This function iterates over your dataframe and parses the dictionary result for each row. Since I am only interested in the sentiment with the highest score, I select the first label by indexing into the list with `result['labels'][0]`. If you want the top three sentiments, for example, you can use a slice instead: `result['labels'][0:3]`. Similarly, for the top three scores, use `result['scores'][0:3]`.
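As a sketch of that variation on a single piece of text, pairing the top three labels with their scores might look like this (the variable names are just for illustration):

# Keep the top three emotions and their scores for one piece of text
output = classifier(sample_text, text_labels, multi_label=False)
top_three = list(zip(output["labels"][0:3], output["scores"][0:3]))
# e.g. [('anticipation', 0.604), ('trust', 0.116), ('joy', 0.075)]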

Now you can run the function on your dataframe!

# run prediction on df

results_df = predict_sentiment(df=df, text_column ="text", text_labels= text_labels)

Here I pass in three arguments:

  • `df`: The name of your dataframe
  • `text_column`: The name of the column in the dataframe that contains text. Pass this argument as a string.
  • `text_labels`: A list of text labels for sentiment classification

This is a preview of what your returned dataframe looks like:

For each piece of text, you can get the associated sentiment along with the model score.
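Since the preview above is a screenshot in the original post, a quick way to inspect the same columns locally (assuming your text column is named "text", as in the call above) is:

# Inspect the text alongside the predicted sentiment and score
print(results_df[["text", "sentiment", "score"]].head())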

Conclusion

Classic sentiment analysis models explore positive or negative sentiment in a piece of text, which can be limiting when you want to explore more nuance, like emotions, in the text.

While you can explore emotions with sentiment analysis models, it usually requires a labeled dataset and more effort to implement. Zero-shot classification models are versatile and can generalize across a broad array of sentiments without needing labeled data or prior training.

As we explored in this example, zero-shot models take in a list of labels and return the predictions for a piece of text. We passed in a list of emotions as our labels, and the results were pretty good considering the model wasn’t trained on this type of emotional data. This type of classification is a valuable tool in analyzing mental health-related text, which allows us to gain a more comprehensive understanding of the emotional landscape and contributes to improved support for mental well-being.

All images, unless otherwise noted, are by the author.


