Turning Words into Numbers

As Katherine Kinnaird and I continue our work on the TED Talks, we have found ourselves drawn to a closer examination of topics, a notion we both feel has been underexamined in its use in the humanities.

Most humanists use an implementation of LDA, which we will probably also use simply to stay in parallel, but at some point in our work, frustrated with my inability to get LDA to work within Python, I picked up Alan Riddell’s DARIAH tutorial and drafted an implementation of NMF topic modeling for our corpus. One advantage I noticed right away, in comparing the results to earlier work I had done with Jonathan Goodwin, was what seemed like a much more stable set of word clusters in the algorithmically derived topics.

Okay, good. But Kinnaird noticed that stopwords kept creeping into the topics, which raised larger questions about how NMF does what it does. And that meant, because she is so thorough, backing up a bit and making sure we understand how NMF works.

What follows is an experiment to understand the shape and nature of the tf matrix, the tfidf matrix, and the output of the sklearn NMF algorithm. Some of this is driven by the following essays:

To start our adventure, we needed a small set of texts with sufficient overlap that we could later successfully derive topics from them. I set myself the task of creating ten sentences, each of approximately ten words. Careful readers who take the time to read the sentences themselves will, I hope, forgive me for the texts being rather reflexive in nature, but it did seem appropriate given the overall reflexive nature of this task.

# =-=-=-=-=-=-=-=-=-=-=
# The Toy Corpus
# =-=-=-=-=-=-=-=-=-=-= 

sentences = ["Each of these sentences consists of about ten words.",
             "Ten sentence stories and ten word stories were once popular.",
             "Limiting the vocabulary to ten words is difficult.",
             "It is quite difficult to create sentences of just ten words",
             "I need, in fact, variety in the words used.",
             "With all these texts being about texts, there will be few topics.",
             "But I do not want too much variety in the vocabulary.",
             "I want to keep the total vocabulary fairly small.",
             "With a small vocabulary comes a small matrix.",
             "The smaller the matrix the more we will be able to see how things work."]


# =-=-=-=-=-=-=-=-=-=-=
# The Stopwords for this corpus
# =-=-=-=-=-=-=-=-=-=-= 

stopwords = ["a", "about", "all", "and", "be", "being", "but", "do", "each", "few", 
             "how", "i", "in", "is", "it", "more", "much", "not", "of", "once", "the", 
             "there", "these", "to", "too", "want", "we", "were", "will", "with"]

Each text is simply a sentence in a list of strings. Below the texts is the custom stopword list for this corpus. For those curious, there are 102 tokens in the corpus and 30 stopwords on the list. Once the stopwords are applied, 49 tokens remain, representing 31 distinct words.

# =-=-=-=-=-=
# Clean & Tokenize
# =-=-=-=-=-=

import re
from nltk.tokenize import WhitespaceTokenizer

tokenizer = WhitespaceTokenizer()
# stopwords = re.split('\s+', open('../data/tt_stop.txt', 'r').read().lower())

# Loop to tokenize, stop, and stem (if needed) texts.
tokenized = []
for sentence in sentences:   
    raw = re.sub(r"[^\w\d'\s]+",'', sentence).lower()
    tokens = tokenizer.tokenize(raw)
    stopped_tokens = [word for word in tokens if not word in stopwords]
    tokenized.append(stopped_tokens)


# =-=-=-=-=-=-=-=-=-=-=
# Re-Assemble Texts as Strings from Lists of Words
# (because this is what sklearn expects)
# =-=-=-=-=-=-=-=-=-=-= 

texts = []
for item in tokenized:
    the_string = ' '.join(item)
    texts.append(the_string)
for text in texts:
    print(text)
sentences consists ten words
ten sentence stories ten word stories popular
limiting vocabulary ten words difficult
quite difficult create sentences just ten words
need fact variety words used
texts texts topics
variety vocabulary
keep total vocabulary fairly small
small vocabulary comes small matrix
smaller matrix able see things work
all_words = ' '.join(texts).split()
print("There are {} tokens representing {} words."
      .format(len(all_words), len(set(all_words))))
There are 49 tokens representing 31 words.

We will explore below the possibility of using the sklearn module’s built-in tokenization and stopword abilities, but while I continue to teach myself that functionality, we can move ahead with understanding the vectorization of a corpus.

There are a lot of ways to turn a series of words into a series of numbers. One of the principal ways ignores the particular context in which a word occurs, as we might understand it within a given sentence, and simply considers the word in relation to the other words in a text. That is, one way to turn words into numbers is simply to count the words in a text, reducing a text to what is known as a “bag of words.” (There’s a lot of linguistics and information science that validates this approach, but it will always chafe most humanists.)

If we run our corpus of ten sentences through the CountVectorizer, we will get a representation of it as a series of numbers, each representing the count of a particular word within a particular text:

# =-=-=-=-=-=-=-=-=-=-=
# TF
# =-=-=-=-=-=-=-=-=-=-= 
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

vec = CountVectorizer()
tf_data = vec.fit_transform(texts).toarray()
print(tf_data.shape)
print(tf_data)
(10, 31)
[[0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 2 2 0 0 0 0 0 0 0 1 0 0]
 [0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 0]
 [0 0 0 1 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0]
 [0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 1 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0]
 [0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0]
 [0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 1 0 0 0]
 [1 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1]]

The term frequency vectorizer in sklearn creates a set of words out of all the tokens, like we did above, then counts the number of times a given word occurs within a given text, returning that text as a vector. Thus, the second sentence above:

"Ten sentence stories and ten word stories were once popular." 

which we had tokenized and stopworded to become:

ten sentence stories ten word stories popular

becomes a list of numbers, or a vector, that looks like this:

0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 2 2 0 0 0 0 0 0 0 1 0 0

I chose the second sentence because it has two words that occur twice, ten and stories, so that it didn’t look like a line of binary. If you stack all ten texts on top of each other, you get a matrix of 10 rows, each row a text, and 31 columns, each column one of the important, lexical, words.
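
As a quick sanity check on that row, here is a minimal sketch (reusing the tf_data array from above) that pulls out the vector for the second text and lists which columns are nonzero and what their counts are:

import numpy as np

row = tf_data[1]                 # the second text's vector
for col in np.nonzero(row)[0]:
    print(col, row[col])

The counts land in columns 12, 15, 19, 20, and 28, with the two 2s sitting in columns 19 and 20.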

Based on the location of the two twos, my guess is that the CountVectorizer alphabetizes its list of words, which can also be considered features of a text. A quick check of our set of words, sorted alphabetically, is our first step in confirmation. (It also reveals one of the great problems of working with words: “sentence” and “sentences,” as well as “word” and “words,” are treated separately, so where a human being would regard those as two lexical entries, the computer treats them as four. This is one argument for stemming, but stemming, so far as I have encountered it, is not only no panacea, it also creates other problems.)

the_words = list(set(all_words))
the_words.sort()
print(the_words)
['able', 'comes', 'consists', 'create', 'difficult', 'fact', 'fairly', 'just', 'keep', 'limiting', 'matrix', 'need', 'popular', 'quite', 'see', 'sentence', 'sentences', 'small', 'smaller', 'stories', 'ten', 'texts', 'things', 'topics', 'total', 'used', 'variety', 'vocabulary', 'word', 'words', 'work']

We can actually get that same list from the vectorizer itself with the get_feature_names method:

features = vec.get_feature_names()
print(features)
['able', 'comes', 'consists', 'create', 'difficult', 'fact', 'fairly', 'just', 'keep', 'limiting', 'matrix', 'need', 'popular', 'quite', 'see', 'sentence', 'sentences', 'small', 'smaller', 'stories', 'ten', 'texts', 'things', 'topics', 'total', 'used', 'variety', 'vocabulary', 'word', 'words', 'work']
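
As an aside on the stemming point above, here is a minimal sketch with NLTK’s PorterStemmer (purely illustrative; we are not stemming this corpus) that shows both the appeal and the problem: “sentence” and “sentences” collapse to a single stem, but “stories” comes back as the non-word “stori”.

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
for word in ['sentence', 'sentences', 'word', 'words', 'stories', 'limiting']:
    print(word, '->', stemmer.stem(word))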

We can also inspect the vocabulary_ attribute, which reveals that sklearn stores the vocabulary as a dictionary with each term as the key and, as its value, not a count but the index of the column that term occupies in the matrix above (which makes the variable name occurrences below something of a misnomer):

occurrences = vec.vocabulary_
print(occurrences)
{'comes': 1, 'difficult': 4, 'need': 11, 'matrix': 10, 'vocabulary': 27, 'just': 7, 'see': 14, 'quite': 13, 'smaller': 18, 'consists': 2, 'texts': 21, 'variety': 26, 'sentence': 15, 'total': 24, 'popular': 12, 'create': 3, 'work': 30, 'topics': 23, 'word': 28, 'limiting': 9, 'words': 29, 'ten': 20, 'able': 0, 'keep': 8, 'sentences': 16, 'fairly': 6, 'stories': 19, 'things': 22, 'used': 25, 'fact': 5, 'small': 17}
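
Since vocabulary_ gives positions rather than counts, the quickest way I know to get actual corpus-wide counts is to sum the columns of the count matrix. A minimal sketch, reusing tf_data and features from above:

import numpy as np

# Sum each column of the document-term matrix to get corpus-wide counts
term_counts = dict(zip(features, tf_data.sum(axis=0)))
print(term_counts['ten'], term_counts['vocabulary'])   # 5 and 4, by my count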

It’s also worth pointing out that we can get a count of particular terms within our corpus by feeding the CountVectorizer a vocabulary argument. Here I’ve prepopulated a list with three of our terms — “sentence”, “stories”, and “vocabulary” — and the function returns an array which counts only the occurrence of those three terms across all ten texts:

# =-=-=-=-=-=-=-=-=-=-=
# Controlled Vocabulary Count
# =-=-=-=-=-=-=-=-=-=-= 

tags = ['sentence', 'stories', 'vocabulary']
cv = CountVectorizer(vocabulary=tags)
data = cv.fit_transform(texts).toarray()
print(data)
[[0 0 0]
 [1 2 0]
 [0 0 1]
 [0 0 0]
 [0 0 0]
 [0 0 0]
 [0 0 1]
 [0 0 1]
 [0 0 1]
 [0 0 0]]

So far we’ve been trafficking in raw counts, or occurrences, of a word (aka term, aka feature) in our corpus. Chances are that longer texts, simply because they have more words, will have more occurrences of any given word, which means they may come to be overweighted if we rely on raw counts alone. Fortunately, we can normalize by the length of a text to get a value that lets us compare how often a word is used relative to the size of its text across all the texts in a corpus. That is, we can get a term’s frequency.

As I was working on this bit of code, I learned that sklearn stores this information in a compressed sparse row matrix, wherein a series of (text, term) coordinates are followed by a value. I have captured the first two texts below. (Note the commented out toarray method in the second-to-last line. It’s there so often in sklearn code that I had come to take it for granted.)

from sklearn.feature_extraction.text import TfidfTransformer

tf_transformer = TfidfTransformer(use_idf=False).fit(tf_data)
words_tf = tf_transformer.transform(tf_data)#.toarray()
print(words_tf[0:2])
  (0, 2)    0.5
  (0, 16)   0.5
  (0, 20)   0.5
  (0, 29)   0.5
  (1, 12)   0.301511344578
  (1, 15)   0.301511344578
  (1, 19)   0.603022689156
  (1, 20)   0.603022689156
  (1, 28)   0.301511344578

And here’s that same information represented as an array:

words_tf_array = words_tf.toarray()
print(words_tf_array[0:2])
[[ 0.          0.          0.5         0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.          0.
   0.          0.          0.5         0.          0.          0.          0.5
   0.          0.          0.          0.          0.          0.          0.
   0.          0.5         0.        ]
 [ 0.          0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.30151134
   0.          0.          0.30151134  0.          0.          0.
   0.60302269  0.60302269  0.          0.          0.          0.          0.
   0.          0.          0.30151134  0.          0.        ]]
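
Where do those particular numbers come from? Unless I am misreading the documentation, the transformer’s default norm is “l2”, so each row is scaled to unit Euclidean length rather than divided by the number of words. For the second text, whose nonzero counts are 1, 1, 2, 2, and 1, a quick check reproduces the values above:

import numpy as np

counts = np.array([1, 1, 2, 2, 1])           # nonzero counts for the second text
print(counts / np.sqrt((counts ** 2).sum())) # divide each count by sqrt(11)
# [ 0.30151134  0.30151134  0.60302269  0.60302269  0.30151134]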

Finally, we can also weight words within a document contra the number of times they occur within the overall corpus, thus lowering the value of common words.

# =-=-=-=-=-=-=-=-=-=-=
# TFIDF
# =-=-=-=-=-=-=-=-=-=-= 

tfidf = TfidfVectorizer()
tfidf_data = tfidf.fit_transform(texts)#.toarray()
print(tfidf_data.shape)
print(tfidf_data[1]) # values for second sentence
(10, 31)
  (0, 12)   0.338083066465
  (0, 28)   0.338083066465
  (0, 19)   0.67616613293
  (0, 15)   0.338083066465
  (0, 20)   0.447100526936

And now, again, in the more common form of an array:

tfidf_array = tfidf_data.toarray()
print(tfidf_array[1]) # values for second sentence
[ 0.          0.          0.          0.          0.          0.          0.
  0.          0.          0.          0.          0.          0.33808307
  0.          0.          0.33808307  0.          0.          0.
  0.67616613  0.44710053  0.          0.          0.          0.          0.
  0.          0.          0.33808307  0.          0.        ]
# tfidf_recall = tfidf_data.get_feature_names() # Not working: get_feature_names belongs to the vectorizer (tfidf), not to the transformed matrix
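
To make sense of those numbers, it helps to spell out what TfidfVectorizer does by default (as best I understand the documentation): each count is multiplied by a smoothed inverse document frequency, idf = ln((1 + n) / (1 + df)) + 1, where n is the number of texts and df is the number of texts containing the term, and the resulting row is then L2-normalized. A rough check for the second text:

import numpy as np

n = 10                                # number of texts in the corpus
counts = np.array([1, 1, 2, 2, 1])    # popular, sentence, stories, ten, word
dfs = np.array([1, 1, 1, 4, 1])       # texts in which each term appears
weights = counts * (np.log((1 + n) / (1 + dfs)) + 1)
print(weights / np.sqrt((weights ** 2).sum()))
# roughly [ 0.3381  0.3381  0.6762  0.4471  0.3381], matching the output above

Note that ten, which appears in four texts, ends up with a lower weight than stories even though both occur twice in this sentence: that is the lowering of common words at work.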

Staying within the sklearn ecosystem

What if we do all tokenization and normalization in sklearn?

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# This is the bog-standard version from the documentation
# test_vec = CountVectorizer(input=u'content', 
#                            encoding=u'utf-8', 
#                            decode_error=u'strict', 
#                            strip_accents=None, 
#                            lowercase=True, 
#                            preprocessor=None, 
#                            tokenizer=None, 
#                            stop_words=stopwords, 
#                            token_pattern=u'(?u)\b\w\w+\b', 
#                            ngram_range=(1, 1), 
#                            analyzer=u'word', 
#                            max_df=1.0, 
#                            min_df=1, 
#                            max_features=None, 
#                            vocabulary=None, 
#                            binary=False, 
#                            dtype=<type 'numpy.int64'>)
test_vec = CountVectorizer(lowercase = True, 
                           stop_words = stopwords, 
                           token_pattern = u'(?u)\b\w\w+\b', 
                           ngram_range = (1, 1), 
                           analyzer = u'word')

#test_data = test_vec.fit_transform(texts).toarray() # --> ValueError: empty vocabulary
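
My best guess at the culprit is the token_pattern argument: written as a plain string, Python reads \b as a backspace character rather than a regex word boundary, so the pattern never matches and the vocabulary comes up empty. A minimal sketch of what I expect the working version to look like (the sk_vec and sk_data names are just mine), feeding it the raw sentences so that sklearn handles lowercasing, tokenizing, and stopping on its own:

from sklearn.feature_extraction.text import CountVectorizer

sk_vec = CountVectorizer(lowercase=True,
                         stop_words=stopwords,            # our custom list from above
                         token_pattern=r"(?u)\b\w\w+\b",  # raw string keeps \b a word boundary
                         ngram_range=(1, 1),
                         analyzer="word")
sk_data = sk_vec.fit_transform(sentences).toarray()
print(sk_data.shape)   # should again be (10, 31)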

Counting Control Words in a Text

As I was working on a toy corpus to understand the various facets of sklearn, I came across this very clear example of how to count specific words in a collection of texts:

import sklearn.feature_extraction.text

cv = sklearn.feature_extraction.text.CountVectorizer(vocabulary=['hot', 'cold', 'old'])
data = cv.fit_transform(['pease porridge hot', 'pease porridge cold',
                         'pease porridge in the pot', 'nine days old']).toarray()
print(data)
[[1 0 0]
 [0 1 0]
 [0 0 0]
 [0 0 1]]

Please note that I’ve changed the original a bit to make it easier to deploy in a longer script.

Getting Word Frequencies for 2000+ Texts

What I’ve been working on for the past few days is in preparation for attempting a topic model using the more established LDA instead of NMF, to see how well the two compare, with the understanding that, since there is rarely a one-to-one match-up of topics within either method, there will be no such match across them.

Because LDA does not filter out common words on its own, the way the NMF method (working on tf-idf weights) effectively does, you have to start with a stoplist. I know we can begin with Blei’s and a few other established lists, but I would also like to be able to compare those against our own results. My first thought was to build a dictionary of words and their frequency within the corpus. For convenience’s sake, I am using the NLTK.

Just as a record of what I’ve done, here’s the usual code for loading the talks from the CSV with everything in it:

import pandas
import re

# Get all talks in a list & then into one string
colnames = ['author', 'title', 'date', 'length', 'text']
df = pandas.read_csv('../data/talks-v1b.csv', names=colnames)
talks = df.text.tolist()
alltalks = " ".join(str(item) for item in talks) # Solves pbm of floats in talks

# Clean out all punctuation except apostrophes
all_words = re.sub(r"[^\w\d'\s]+", '', alltalks).lower()

We still need to identify which talks have floats for values and determine what impact, if any, it has on the project.

import nltk

tt_tokens = nltk.word_tokenize(all_words)

tt_freq = {}
for word in tt_tokens:
    try:
        tt_freq[word] += 1
    except: 
        tt_freq[word] = 1

Using this method, the dictionary has 63426 entries. Most of those are going to be single-entry items or named entities, but I do think it’s worth looking at them, as well as the high-frequency words that may not be a part of established stopword lists: I think it will be important to note those words which are specifically common to TED Talks.
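
A quick way to peek at both ends of that distribution, assuming the tt_freq dictionary built above (the count threshold of 1000 here is an arbitrary placeholder):

# Words that occur only once: mostly rare words, typos, and named entities
hapaxes = [word for word, count in tt_freq.items() if count == 1]
print(len(hapaxes))

# High-frequency words not on the NLTK stoplist: candidates for a TED-specific list
from nltk.corpus import stopwords as nltk_stops
standard = set(nltk_stops.words('english'))
frequent = sorted(((count, word) for word, count in tt_freq.items()
                   if count > 1000 and word not in standard), reverse=True)
print(frequent[0:20])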

I converted the dictionary to a list of tuples in order to sort it; I see that there is a way to sort a dictionary directly in Python, but this is a way I know. Looking at the most common words, I see that NLTK didn’t get rid of punctuation: I cleared this up by removing punctuation earlier in the process, keeping the contractions (words with apostrophes), which the NLTK does not respect.

N.B. I tried doing this simply with a regex that splits on white space, but I am still seeing contractions split into different words.

tt_freq_list = [(val, key) for key, val in tt_freq.items()]
tt_freq_list.sort(reverse=True)
tt_freq_list[0:20]

[(210294, 'the'),
 (151163, 'and'),
 (126887, 'to'),
 (116155, 'of'),
 (106547, 'a'),
 (96375, 'that'),
 (83740, 'i'),
 (78986, 'in'),
 (75643, 'it'),
 (71766, 'you'),
 (68573, 'we'),
 (65295, 'is'),
 (56535, "'s"),
 (49889, 'this'),
 (37525, 'so'),
 (33424, 'they'),
 (32231, 'was'),
 (30067, 'for'),
 (28869, 'are'),
 (28245, 'have')]
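
(For the record, the built-in route I alluded to above is collections.Counter, which handles the sorting for you; a minimal equivalent:)

from collections import Counter

tt_counter = Counter(tt_tokens)
print(tt_counter.most_common(20))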

Keeping the apostrophes proved to be harder than I thought — and I tried going a “pure Python” route and splitting only on white spaces, trying both of the following:

word_list = re.split(r'\s+', all_words)
word_list = all_words.split()

I still got (56535, "'s"). (The good news is that the counts match.)

Okay, good news. The NLTK white space tokenizer works:

from nltk.tokenize import WhitespaceTokenizer
white_words = WhitespaceTokenizer().tokenize(all_words)

I tried using scikit-learn’s CountVectorizer, but it requires a list of strings, not one string, and it does not like that some of the texts are floats. So we’ll save dealing with that for when we come to look at this corpus as a corpus and not as one giant collection of words.

from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer()
word_counts = count_vect.fit_transform(talks)

ValueError: np.nan is an invalid document, expected byte or unicode string.
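
The eventual fix is probably just to clean up the list before vectorizing: drop the missing values (or cast everything to strings) so that CountVectorizer gets the list of strings it expects. A sketch of what I have in mind, reusing the df from above:

from sklearn.feature_extraction.text import CountVectorizer

clean_talks = df.text.dropna().astype(str).tolist()  # drop NaNs, force strings

count_vect = CountVectorizer()
word_counts = count_vect.fit_transform(clean_talks)
print(word_counts.shape)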

The final, working, script of the day produces the output we want:

# Tokenize on whitespace
from nltk.tokenize import WhitespaceTokenizer
tt_tokens = WhitespaceTokenizer().tokenize(all_words)

# Build a dictionary of words and their frequency in the corpus
tt_freq = {}
for word in tt_tokens:
    try:
        tt_freq[word] += 1
    except: 
        tt_freq[word] = 1

# Build a list of tuples, sort, and see some results 
tt_freq_list = [(val, key) for key, val in tt_freq.items()]
tt_freq_list.sort(reverse=True)
tt_freq_list[0:20]

More Sentiment Comparisons

I added two kinds of moving averages to the sentiments.py script, and as you can see from the results below, whether you go with the numpy version of the running average or the Technical Analysis library (talib), you get the same results: numpy starts its running average at the beginning of the window; talib at the end. Here, the window was 70 sentences, roughly 10% of the total sentence count of approximately 700. I entered the following in Python:

my_file = "/Users/john/Code/texts/sentiment/mdg.txt"
smooth_plots(my_file, 70)

And here is the graph:

[Figure: Moving/Running Averages]
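
For anyone curious about that alignment difference, here is a minimal sketch of the two running averages as I understand them; the sentiment values below are random stand-ins, not output from sentiments.py:

import numpy as np
import talib

window = 70
sentiments = np.random.uniform(-1, 1, 700)   # stand-in for per-sentence sentiment scores

# numpy running average: the smoothed series lines up with the start of each window
np_avg = np.convolve(sentiments, np.ones(window) / window, mode='valid')

# talib simple moving average: the first window - 1 values are NaN,
# so the smoothed series effectively begins at the end of the first window
ta_avg = talib.SMA(sentiments, timeperiod=window)

print(np_avg.shape, ta_avg.shape)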

The entire script is available as a gh.

Next step: NORMALIZATION!