Rick and Morty and Tidy Data Principles (Part 2)

Categories: R, Rick and Morty, NLP

Searching keywords and patterns in the TV show.

Author: Mauricio “Pachá” Vargas S.
Published: October 22, 2017

Updated 2022-05-28: I moved the blog to Quarto, so I had to update the paths. I am also not using pacman and I am loading libraries in the classic way now.

Motivation

The first part left an open door to analyzing the Rick and Morty scripts using tf-idf, bag-of-words, or other NLP techniques. Here I’m also taking a lot of ideas from Julia Silge’s blog.

Note: If some images appear too small on your screen you can open them in a new tab to show them in their original size.

Term Frequency

The most basic measure in natural language processing is, obviously, to just count words. This is a crude way of knowing what a document is about. The problem with counting words, however, is that some words (called stopwords) are always too common, like “the” or “that”. So, to create a more meaningful representation, what people usually do is compare the word counts observed in a document with those of a larger body of text.

Tf-idf is the frequency of a term adjusted for how rarely it is used. It is intended to measure how important a word is to a document in a collection (or corpus) of documents.

The inverse document frequency for any given term is defined as:

\[idf(\text{term}) = \ln{\left(\frac{n_{\text{documents}}}{n_{\text{documents containing term}}}\right)}\]

We can use tidy data principles to approach tf-idf analysis and use consistent, effective tools to quantify how important various terms are in a document that is part of a collection.
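
To make the formula concrete, here is a minimal sketch with a made-up three-document corpus (the documents, words and counts below are invented purely for illustration); tidytext’s bind_tf_idf() applies exactly this weighting to a table of word counts:

library(dplyr)
library(tidytext)
library(tibble)

# a made-up toy corpus: word counts for three tiny "documents"
toy_counts <- tribble(
  ~document, ~word,       ~n,
  "doc1",    "portal",     3,
  "doc1",    "science",    2,
  "doc2",    "portal",     1,
  "doc2",    "family",     4,
  "doc3",    "portal",     2,
  "doc3",    "adventure",  5
)

# "portal" appears in all three documents, so its idf is ln(3/3) = 0,
# while the other words appear in a single document, so their idf is ln(3/1)
toy_counts %>%
  bind_tf_idf(word, document, n)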

What do Rick and Morty say?

Let’s start by looking at the Rick and Morty dialogues and examine first term frequency, then tf-idf. I’ll remove the stopwords before the analysis.

library(data.table)
library(tidyr)
library(tidytext)
library(dplyr)
library(ggplot2)
library(viridis)
library(ggstance)
library(widyr)

# read the subtitles obtained in part 1, keeping only lines that survive ASCII conversion
rick_and_morty_subs <- as_tibble(fread("../../../10/13/rick-and-morty-tidy-data-1/rick_and_morty_subs.csv")) %>%
  mutate(text = iconv(text, to = "ASCII")) %>%
  drop_na()

# tokenize into words, remove stopwords and count words by season
rick_and_morty_subs_tidy <- rick_and_morty_subs %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words) %>%
  count(season, word, sort = TRUE)

# total number of words per season, needed to compute term frequencies
total_words <- rick_and_morty_subs_tidy %>%
  group_by(season) %>%
  summarize(total = sum(n))

season_words <- left_join(rick_and_morty_subs_tidy, total_words)

season_words
# A tibble: 11,838 × 4
   season word       n total
   <chr>  <chr>  <int> <int>
 1 S01    morty   1332 19299
 2 S01    rick    1178 19299
 3 S01    jerry    499 19299
 4 S03    morty    330 12847
 5 S03    rick     249 12847
 6 S02    rick     242 12503
 7 S01    summer   233 19299
 8 S01    beth     230 19299
 9 S02    morty    228 12503
10 S01    yeah     209 19299
# ℹ 11,828 more rows

Let’s look at the distribution of n/total for each season, the number of times a word appears in a season divided by the total number of terms (words) in that season. This is term frequency!
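
Just for reference, and as a small sketch reusing the season_words table built above, the ratio can also be computed explicitly as a column:

# term frequency as an explicit column: word count divided by the season total
season_words %>%
  mutate(term_frequency = n / total) %>%
  arrange(desc(term_frequency))

The histogram below shows how these ratios are distributed within each season.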

ggplot(season_words, aes(n/total, fill = season)) +
  geom_histogram(alpha = 0.8, show.legend = FALSE) +
  xlim(0, 0.001) +
  labs(title = "Term Frequency Distribution in Rick and Morty' Seasons",
       y = "Count") +
  facet_wrap(~season, nrow = 3, scales = "free_y") +
  theme_minimal(base_size = 13) +
  scale_fill_viridis(end = 0.85, discrete = TRUE) +
  theme(strip.text = element_text(hjust = 0, face = "italic"))

There are very long tails to the right for these dialogues because of the extremely common words. These plots exhibit similar distributions for each season, with many words that occur rarely and fewer words that occur frequently. The idea of tf-idf is to find the important words for the content of each document by decreasing the weight for commonly used words and increasing the weight for words that are not used very much in a collection or corpus of documents, in this case, the group of Rick and Morty’s seasons as a whole. Calculating tf-idf attempts to find the words that are important (i.e., common) in a text, but not too common. Let’s do that now.

season_words <- season_words %>%
  bind_tf_idf(word, season, n)

season_words
# A tibble: 11,838 × 7
   season word       n total     tf   idf tf_idf
   <chr>  <chr>  <int> <int>  <dbl> <dbl>  <dbl>
 1 S01    morty   1332 19299 0.0690     0      0
 2 S01    rick    1178 19299 0.0610     0      0
 3 S01    jerry    499 19299 0.0259     0      0
 4 S03    morty    330 12847 0.0257     0      0
 5 S03    rick     249 12847 0.0194     0      0
 6 S02    rick     242 12503 0.0194     0      0
 7 S01    summer   233 19299 0.0121     0      0
 8 S01    beth     230 19299 0.0119     0      0
 9 S02    morty    228 12503 0.0182     0      0
10 S01    yeah     209 19299 0.0108     0      0
# ℹ 11,828 more rows

Notice that idf and thus tf-idf are zero for these extremely common words even after removing stopwords. These are words that appear all the time in every season, so the idf term (which is then the natural log of 1) is zero; “Rick” and “Morty” are examples of this. The inverse document frequency (and thus tf-idf) is very low (near zero) for words that occur in many of the documents in a collection; this is how this approach decreases the weight for common words. The inverse document frequency will be higher for words that occur in fewer of the documents in the collection. Let’s look at terms with high tf-idf.
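
Before that, a quick check (a sketch reusing the tidy table built above) confirms that “rick” and “morty” indeed appear in all three seasons, which is why their idf is ln(3/3) = 0:

# number of distinct seasons in which each of these words appears
rick_and_morty_subs_tidy %>%
  filter(word %in% c("rick", "morty")) %>%
  group_by(word) %>%
  summarize(n_seasons = n_distinct(season))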

season_words %>%
  select(-total) %>%
  arrange(desc(tf_idf))
# A tibble: 11,838 × 6
   season word            n      tf   idf  tf_idf
   <chr>  <chr>       <int>   <dbl> <dbl>   <dbl>
 1 S03    pickle         43 0.00335  1.10 0.00368
 2 S02    unity          32 0.00256  1.10 0.00281
 3 S01    meeseeks       40 0.00207  1.10 0.00228
 4 S03    vindicators    26 0.00202  1.10 0.00222
 5 S02    purge          25 0.00200  1.10 0.00220
 6 S03    tommy          20 0.00156  1.10 0.00171
 7 S01    crystals       30 0.00155  1.10 0.00171
 8 S02    deer           19 0.00152  1.10 0.00167
 9 S02    jan            15 0.00120  1.10 0.00132
10 S02    roy            15 0.00120  1.10 0.00132
# ℹ 11,828 more rows

Curious about “pickle”? You’d better watch the Pickle Rick episode if you don’t get why “pickle” is the highest-ranked term by tf-idf. “Vindicators” is another term concentrated in a single episode, the one where the Vindicators appear. There’s even an episode where the flu is part of the central problem and Rick has to use his mind to deal with a flu that got out of control because of his inventions.

Some of the values for idf are the same for different terms because there are 3 documents in this corpus and we are seeing the numerical values of ln(3/1), ln(3/2), etc. Let’s look at a visualization for these high tf-idf words.
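
Before plotting, those values are easy to verify by hand (just a quick check):

# the three possible idf values in a corpus of 3 documents (seasons)
log(3 / 1) # ~1.10, for words appearing in a single season
log(3 / 2) # ~0.41, for words appearing in two seasons
log(3 / 3) # 0, for words appearing in all three seasons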

plot_tfidf <- season_words %>%
  arrange(desc(tf_idf)) %>%
  mutate(word = factor(word, levels = rev(unique(word))))

ggplot(plot_tfidf[1:20, ], aes(tf_idf, word, fill = season, alpha = tf_idf)) +
  geom_barh(stat = "identity") +
  labs(title = "Highest tf-idf words in Rick and Morty' Seasons",
       y = NULL, x = "tf-idf") +
  theme_minimal(base_size = 13) +
  scale_alpha_continuous(range = c(0.6, 1), guide = "none") +
  scale_x_continuous(expand = c(0, 0)) +
  scale_fill_viridis(end = 0.85, discrete = TRUE) +
  theme(legend.title = element_blank(),
        legend.justification = c(1, 0),
        legend.position = c(1, 0))

Let’s look at the seasons individually.

plot_tfidf <- plot_tfidf %>%
  group_by(season) %>%
  top_n(15, tf_idf) %>%
  ungroup()

ggplot(plot_tfidf, aes(tf_idf, word, fill = season, alpha = tf_idf)) +
  geom_barh(stat = "identity", show.legend = FALSE) +
  labs(title = "Highest tf-idf words in Rick and Morty' Seasons",
       y = NULL, x = "tf-idf") +
  facet_wrap(~season, nrow = 3, scales = "free") +
  theme_minimal(base_size = 13) +
  scale_alpha_continuous(range = c(0.6, 1)) +
  scale_x_continuous(expand = c(0,0)) +
  scale_fill_viridis(end = 0.85, discrete = TRUE) +
  theme(strip.text = element_text(hjust = 0, face = "italic"))