Turning Words into Numbers

As Katherine Kinnaird and I continue our work on the Tedtalks, we have found ourselves drawn to examine more closely the notion of topics, which we both feel has been underexamined in the way it gets used in the humanities.

Most humanists use an implementation of LDA, and we will probably use one too, simply to stay in parallel with that work. At some point, however, frustrated by my inability to get LDA working in Python, I picked up Alan Riddell's DARIAH tutorial and drafted an implementation of NMF topic modeling for our corpus. One advantage I noticed right away, comparing the results to earlier work I had done with Jonathan Goodwin, was what seemed like a much more stable set of word clusters in the algorithmically derived topics.

Okay, good. But Kinnaird noticed that stopwords kept creeping into the topics, which raised larger questions about how NMF does what it does. And that meant, because she is so thorough, backing up a bit and making sure we understand how NMF works.

What follows is an experiment to understand the shape and nature of the term frequency (tf) matrix, the tf-idf matrix, and the output of the sklearn NMF algorithm. Some of this is driven by the following essays:

To start our adventure, we needed a small set of texts with sufficient overlap that we could later successfully derive topics from them. I set myself the task of creating ten sentences, each of approximately ten words. Careful readers who take the time to read the sentences themselves will, I hope, forgive me for the texts being rather reflexive in nature, but it did seem appropriate given the overall reflexive nature of this task.

# =-=-=-=-=-=-=-=-=-=-=
# The Toy Corpus
# =-=-=-=-=-=-=-=-=-=-= 

sentences = ["Each of these sentences consists of about ten words.",
             "Ten sentence stories and ten word stories were once popular.",
             "Limiting the vocabulary to ten words is difficult.",
             "It is quite difficult to create sentences of just ten words",
             "I need, in fact, variety in the words used.",
             "With all these texts being about texts, there will be few topics.",
             "But I do not want too much variety in the vocabulary.",
             "I want to keep the total vocabulary fairly small.",
             "With a small vocabulary comes a small matrix.",
             "The smaller the matrix the more we will be able to see how things work."]


# =-=-=-=-=-=-=-=-=-=-=
# The Stopwords for this corpus
# =-=-=-=-=-=-=-=-=-=-= 

stopwords = ["a", "about", "all", "and", "be", "being", "but", "do", "each", "few", 
             "how", "i", "in", "is", "it", "more", "much", "not", "of", "once", "the", 
             "there", "these", "to", "too", "want", "we", "were", "will", "with"]

Each text is simply a sentence in a list of strings. Below the texts is the custom stopword list for this corpus. For those curious, there are a total of 102 tokens in the corpus and 30 stopwords. Once the stopwords are applied, 49 tokens remain, representing 31 unique words.
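For anyone who wants to verify those numbers, a quick sanity check is easy enough (a minimal sketch; the whitespace split leaves punctuation attached to some tokens, which does not change the raw count):

raw_tokens = ' '.join(sentences).split()
print(len(raw_tokens), len(stopwords))   # expect 102 raw tokens and 30 stopwords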

# =-=-=-=-=-=
# Clean & Tokenize
# =-=-=-=-=-=

import re
from nltk.tokenize import WhitespaceTokenizer

tokenizer = WhitespaceTokenizer()
# stopwords = re.split('\s+', open('../data/tt_stop.txt', 'r').read().lower())

# Loop to tokenize, stop, and stem (if needed) texts.
tokenized = []
for sentence in sentences:   
    raw = re.sub(r"[^\w\d'\s]+", '', sentence).lower()   # strip punctuation (keeping apostrophes) and lowercase
    tokens = tokenizer.tokenize(raw)
    stopped_tokens = [word for word in tokens if word not in stopwords]
    tokenized.append(stopped_tokens)


# =-=-=-=-=-=-=-=-=-=-=
# Re-Assemble Texts as Strings from Lists of Words
# (because this is what sklearn expects)
# =-=-=-=-=-=-=-=-=-=-= 

texts = []
for item in tokenized:
    the_string = ' '.join(item)
    texts.append(the_string)
for text in texts:
    print(text)
sentences consists ten words
ten sentence stories ten word stories popular
limiting vocabulary ten words difficult
quite difficult create sentences just ten words
need fact variety words used
texts texts topics
variety vocabulary
keep total vocabulary fairly small
small vocabulary comes small matrix
smaller matrix able see things work
all_words = ' '.join(texts).split()
print("There are {} tokens representing {} words."
      .format(len(all_words), len(set(all_words))))
There are 49 tokens representing 31 words.

We will explore below the possibility of using the sklearn module’s built-in tokenization and stopword abilities, but while I continue to teach myself that functionality, we can move ahead with understanding the vectorization of a corpus.

There are a lot of ways to turn a series of words into a series of numbers. One of the principal ways ignores the particular context in which a word occurs, the kind of context we might recover from reading a sentence, and simply considers a word in relation to the other words in a text. That is, one way to turn words into numbers is simply to count the words in a text, reducing it to what is known as a “bag of words.” (There’s a lot of linguistics and information science that validates this approach, but it will always chafe most humanists.)
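Before handing anything to sklearn, it is worth seeing how little is going on here. A bag of words is nothing more than a tally of the words in a text with their order thrown away; a minimal sketch using only the standard library:

from collections import Counter

# The second (cleaned) text: word order disappears, only counts remain.
bag = Counter("ten sentence stories ten word stories popular".split())
print(bag)   # e.g. Counter({'ten': 2, 'stories': 2, 'sentence': 1, 'word': 1, 'popular': 1})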

If we run our corpus of ten sentences through the CountVectorizer, we will get a representation of it as a series of numbers, each representing the count of a particular word within a particular text:

# =-=-=-=-=-=-=-=-=-=-=
# TF
# =-=-=-=-=-=-=-=-=-=-= 
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

vec = CountVectorizer()
tf_data = vec.fit_transform(texts).toarray()
print(tf_data.shape)
print(tf_data)
(10, 31)
[[0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 2 2 0 0 0 0 0 0 0 1 0 0]
 [0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 0]
 [0 0 0 1 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0]
 [0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 1 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0]
 [0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0]
 [0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 1 0 0 0]
 [1 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1]]

The count vectorizer in sklearn, CountVectorizer, builds a vocabulary out of all the tokens, much as we did above, then counts the number of times each word occurs within a given text, returning that text as a vector of counts. Thus, the second sentence above:

"Ten sentence stories and ten word stories were once popular." 

which we had tokenized and stopworded to become:

ten sentence stories ten word stories popular

becomes a list of numbers, or a vector, that looks like this:

0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 2 2 0 0 0 0 0 0 0 1 0 0

I chose the second sentence because it has two words that occur twice, “ten” and “stories,” so that the vector didn’t look like a line of binary. If you stack all ten texts on top of each other, you get a matrix of 10 rows, each row a text, and 31 columns, each column one of the remaining lexical words.

Based on the location of the two twos, my guess is that CountVectorizer alphabetizes its list of words, which can also be considered the features of a text. A quick check of our set of words, sorted alphabetically, is our first step in confirmation. (It also reveals one of the great problems of working with words: “sentence” and “sentences,” as well as “word” and “words,” are treated separately, so where a human being would see two lexical entries, the computer sees four. This is one argument for stemming, but stemming, so far as I have encountered it, is not only no panacea, it also creates problems of its own; there is a quick illustration after the sorted list below.)

the_words = list(set(all_words))
the_words.sort()
print(the_words)
['able', 'comes', 'consists', 'create', 'difficult', 'fact', 'fairly', 'just', 'keep', 'limiting', 'matrix', 'need', 'popular', 'quite', 'see', 'sentence', 'sentences', 'small', 'smaller', 'stories', 'ten', 'texts', 'things', 'topics', 'total', 'used', 'variety', 'vocabulary', 'word', 'words', 'work']
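As for that stemming caveat, here is a minimal sketch using NLTK's PorterStemmer (an assumption on my part; any stemmer would do). It collapses "sentence"/"sentences" and "word"/"words," but it also produces stems that are not words at all:

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
for word in ['sentence', 'sentences', 'word', 'words', 'stories', 'vocabulary']:
    # e.g. 'sentences' -> 'sentenc', 'stories' -> 'stori', 'vocabulary' -> 'vocabulari'
    print(word, '->', stemmer.stem(word))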

We can get that same sorted list directly from the vectorizer itself with the get_feature_names method:

features = vec.get_feature_names()
print(features)
['able', 'comes', 'consists', 'create', 'difficult', 'fact', 'fairly', 'just', 'keep', 'limiting', 'matrix', 'need', 'popular', 'quite', 'see', 'sentence', 'sentences', 'small', 'smaller', 'stories', 'ten', 'texts', 'things', 'topics', 'total', 'used', 'variety', 'vocabulary', 'word', 'words', 'work']
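With the feature names in hand, the count matrix becomes much easier to read if we label its rows and columns. A minimal sketch, assuming pandas is available:

import pandas as pd

# Label rows (texts) and columns (alphabetized features) to make the
# 10 x 31 count matrix readable at a glance.
tf_frame = pd.DataFrame(tf_data, columns=features)
print(tf_frame[['sentence', 'sentences', 'ten', 'stories']])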

We can also inspect the vectorizer’s vocabulary_ attribute, which reveals that sklearn stores the vocabulary as a dictionary with each term as the key and, as the value, not its count but its column index in the matrix:

occurrences = vec.vocabulary_   # maps each term to its column index, not its frequency
print(occurrences)
{'comes': 1, 'difficult': 4, 'need': 11, 'matrix': 10, 'vocabulary': 27, 'just': 7, 'see': 14, 'quite': 13, 'smaller': 18, 'consists': 2, 'texts': 21, 'variety': 26, 'sentence': 15, 'total': 24, 'popular': 12, 'create': 3, 'work': 30, 'topics': 23, 'word': 28, 'limiting': 9, 'words': 29, 'ten': 20, 'able': 0, 'keep': 8, 'sentences': 16, 'fairly': 6, 'stories': 19, 'things': 22, 'used': 25, 'fact': 5, 'small': 17}
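If what we actually want are corpus-wide counts for each term, which vocabulary_ does not give us, we can sum the columns of the count matrix. A minimal sketch:

# Column sums of tf_data give the number of occurrences of each feature
# across the whole corpus.
corpus_counts = dict(zip(features, tf_data.sum(axis=0)))
print(corpus_counts['ten'], corpus_counts['vocabulary'])   # expect 5 and 4 for this corpus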

It’s also worth pointing out that we can get a count of particular terms within our corpus by feeding the CountVectorizer a vocabulary argument. Here I’ve prepopulated a list with three of our terms — “sentence”, “stories”, and “vocabulary” — and the function returns an array which counts only the occurrence of those three terms across all ten texts:

# =-=-=-=-=-=-=-=-=-=-=
# Controlled Vocabulary Count
# =-=-=-=-=-=-=-=-=-=-= 

tags = ['sentence', 'stories', 'vocabulary']
cv = CountVectorizer(vocabulary=tags)
data = cv.fit_transform(texts).toarray()
print(data)
[[0 0 0]
 [1 2 0]
 [0 0 1]
 [0 0 0]
 [0 0 0]
 [0 0 0]
 [0 0 1]
 [0 0 1]
 [0 0 1]
 [0 0 0]]

So far we’ve been trafficking in raw counts, or occurrences, of a word (aka term, aka feature) in our corpus. Chances are, longer texts, which simply have more words, will have more of any given word, which means they may come to be overweighted if we rely only on occurrences. Fortunately, we can normalize each text’s counts so that a word’s value reflects its use relative to the text it appears in, making texts of different sizes comparable. That is, we can get a term’s frequency. (Strictly speaking, sklearn’s TfidfTransformer normalizes each row by its Euclidean, or L2, norm by default rather than dividing by the number of words, but the effect is the same in spirit: counts are scaled so that texts of different lengths can be compared. The sketch after the array output below verifies the numbers.)

As I was working on this bit of code, I learned that sklearn stores this information in a compressed sparse row (CSR) matrix, in which each (text, term) coordinate is paired with a value and only the nonzero entries are kept. I have captured the first two texts below. (Note the commented-out toarray method in the second-to-last line. It appears so often in sklearn code that I had come to take it for granted.)

from sklearn.feature_extraction.text import TfidfTransformer

tf_transformer = TfidfTransformer(use_idf=False).fit(tf_data)
words_tf = tf_transformer.transform(tf_data)#.toarray()
print(words_tf[0:2])
  (0, 2)    0.5
  (0, 16)   0.5
  (0, 20)   0.5
  (0, 29)   0.5
  (1, 12)   0.301511344578
  (1, 15)   0.301511344578
  (1, 19)   0.603022689156
  (1, 20)   0.603022689156
  (1, 28)   0.301511344578
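For the curious, the coordinate-and-value printout above is just how scipy displays a CSR matrix; under the hood the matrix stores three flat arrays, which we can peek at directly. A minimal sketch:

# A CSR matrix keeps only the nonzero entries: their values, their column
# (term) indices, and pointers marking where each row begins in those arrays.
print(words_tf.data[:4])      # the four nonzero values of the first text
print(words_tf.indices[:4])   # their column indices (should match 2, 16, 20, 29 above)
print(words_tf.indptr[:3])    # row boundaries for rows 0 and 1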

And here’s that same information represented as an array:

words_tf_array = words_tf.toarray()
print(words_tf_array[0:2])
[[ 0.          0.          0.5         0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.          0.
   0.          0.          0.5         0.          0.          0.          0.5
   0.          0.          0.          0.          0.          0.          0.
   0.          0.5         0.        ]
 [ 0.          0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.30151134
   0.          0.          0.30151134  0.          0.          0.
   0.60302269  0.60302269  0.          0.          0.          0.          0.
   0.          0.          0.30151134  0.          0.        ]]
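The 0.5s and 0.30151s above are just the raw counts rescaled: by default TfidfTransformer divides each row by its Euclidean (L2) norm. A minimal sketch reproducing the second row by hand:

import numpy as np

row = tf_data[1]                      # counts for the second text: three ones and two twos
manual = row / np.linalg.norm(row)    # divide by the L2 norm, sqrt(1 + 1 + 4 + 4 + 1)
print(np.allclose(manual, words_tf_array[1]))   # expect True
print(1 / np.sqrt(11))                # expect roughly 0.30151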

Finally, we can also weight words within a document against the number of documents in which they occur across the corpus (the inverse document frequency), thus lowering the value of words that are common everywhere.

# =-=-=-=-=-=-=-=-=-=-=
# TFIDF
# =-=-=-=-=-=-=-=-=-=-= 

tfidf = TfidfVectorizer()
tfidf_data = tfidf.fit_transform(texts)#.toarray()
print(tfidf_data.shape)
print(tfidf_data[1]) # values for second sentence
(10, 31)
  (0, 12)   0.338083066465
  (0, 28)   0.338083066465
  (0, 19)   0.67616613293
  (0, 15)   0.338083066465
  (0, 20)   0.447100526936

And now, again, in the more common form of an array:

tfidf_array = tfidf_data.toarray()
print(tfidf_array[1]) # values for second sentence
[ 0.          0.          0.          0.          0.          0.          0.
  0.          0.          0.          0.          0.          0.33808307
  0.          0.          0.33808307  0.          0.          0.
  0.67616613  0.44710053  0.          0.          0.          0.          0.
  0.          0.          0.33808307  0.          0.        ]
# tfidf_recall = tfidf_data.get_feature_names() # Not working: get_feature_names belongs
# to the vectorizer, not to the matrix it returns; use tfidf.get_feature_names() instead.
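Where do numbers like 0.67616613 and 0.44710053 come from? With its default settings (smooth_idf=True, norm='l2'), TfidfVectorizer weights each count by idf(t) = ln((1 + n) / (1 + df(t))) + 1, where n is the number of texts and df(t) is the number of texts containing the term, and then normalizes each row by its Euclidean norm. A short sketch reproduces the second sentence's values by hand:

import numpy as np

n_docs = 10
# (count in the second text, number of texts containing the term)
terms = {'popular': (1, 1), 'sentence': (1, 1), 'stories': (2, 1), 'ten': (2, 4), 'word': (1, 1)}

# count * smoothed idf, then divide the whole row by its L2 norm
weights = {t: count * (np.log((1 + n_docs) / (1 + df)) + 1) for t, (count, df) in terms.items()}
norm = np.sqrt(sum(w ** 2 for w in weights.values()))
for t, w in weights.items():
    print(t, round(w / norm, 8))   # 'stories' should land near 0.67616613, 'ten' near 0.44710053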

Staying within the sklearn ecosystem

What if we do all tokenization and normalization in sklearn?

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# This is the bog-standard version from the documentation
# test_vec = CountVectorizer(input=u'content', 
#                            encoding=u'utf-8', 
#                            decode_error=u'strict', 
#                            strip_accents=None, 
#                            lowercase=True, 
#                            preprocessor=None, 
#                            tokenizer=None, 
#                            stop_words=stopwords, 
#                            token_pattern=u'(?u)\b\w\w+\b', 
#                            ngram_range=(1, 1), 
#                            analyzer=u'word', 
#                            max_df=1.0, 
#                            min_df=1, 
#                            max_features=None, 
#                            vocabulary=None, 
#                            binary=False, 
#                            dtype=<type 'numpy.int64'>)
# Note: the documentation shows the default token pattern as u'(?u)\b\w\w+\b',
# but it has to be passed as a raw string: in a plain string '\b' is a backspace
# character, the pattern matches nothing, and fit_transform raises the
# "ValueError: empty vocabulary" I first ran into here.
test_vec = CountVectorizer(lowercase = True, 
                           stop_words = stopwords, 
                           token_pattern = r'(?u)\b\w\w+\b', 
                           ngram_range = (1, 1), 
                           analyzer = u'word')

# Passing the raw sentences lets sklearn do the lowercasing, tokenizing, and
# stopping itself; this should yield the same (10, 31) matrix we built by hand.
test_data = test_vec.fit_transform(sentences).toarray()
print(test_data.shape)

Counting Control Words in a Text

As I was working on a toy corpus to understand the various facets of sklearn, I came across this very clear example of how to count specific words in a collection of texts:

from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(vocabulary=['hot', 'cold', 'old'])
data = cv.fit_transform(['pease porridge hot', 'pease porridge cold',
                         'pease porridge in the pot', 'nine days old']).toarray()
print(data)
[[1 0 0]
 [0 1 0]
 [0 0 0]
 [0 0 1]]

Please note that I’ve changed the original a bit to make it easier to deploy in a longer script.

Travel Card Falderol

Notes for travel card.

  • If workflow has been implemented: scan each document.
  • Membership dues are not allowed.
  • If you’re out of office, check with agency policy.
  • Other employees cannot possess your card.
  • If a charge is declined, contact admin.
  • Program agreement is renewed annually.
  • Cash is not allowed.
  • There are no flexibilities in PPM49.
  • Incidentals, personal purchases are not allowed.
  • Gift cards, baggage fees, and food are allowed with prior approval.
  • Document everything.
  • If card used accidentally, contact admin.

Of Open Tabs and Persistent Concerns

I’ve left this logbook under-attended for a while now, and since I want to get back into writing mode, it seems an appropriate moment to get back into posting here as well. Once again, one of the prompts for doing so is a browser full of tabs: a lot of interesting pages to digest and some sense that their contents will be useful later.

In general, I would say that the pages that remain open, that persist, in my web browsing fall into two categories which I have not yet been able to resolve into one. The first category is making and manufacturing and the future of work in the world. It results in open tabs like:

  • An Ars Technica interview with Cory Doctorow on his new novel, Walkaway, in which Doctorow imagines a post-scarcity world built upon his interest in open-source software, reputation management, and other ideas that have long fascinated him. (I confess that I tried reading his Makers but it just didn’t work for me.)

  • In the interview, Doctorow mentions Bruce Sterling’s Shaping Things, which seems worth a read, since it aspires to be both a history of how we have used energy and matter to create objects in our world and an account of how we might go about doing that in the future.

  • Also in this vein of the future of work or the future of ideas about work is a Guardian column on how the privatization of innovation in the U.S. is in fact starving the country of its innovation. What Ben Tarnoff argues is that private firms and private capital are not capable of taking the kind of risk that the public sector can.

Now, some of these things I read because part of me wants to write a follow-up book to The Amazing Crawfish Boat, one that focuses on how to address, or redress, issues brought about not so much by the technology boom itself as by the changes in our thinking: sometimes we get a little carried away. When I read things like the following in particular, I am struck by how much its authors might benefit from spending time with a farmer:

Accelerationists argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative. Accelerationists favour automation. They favour the further merging of the digital and the human. They often favour the deregulation of business, and drastically scaled-back government. They believe that people should stop deluding themselves that economic and technological progress can be controlled. They often believe that social and political upheaval has a value in itself. (Andy Beckett, 11 May 2017, “Accelerationism: how a fringe philosophy predicted the future we live in”, The Guardian LINK)

Plants take time to grow. You can’t change that. (Not a lot, anyway.) People take time to mature, to digest not only their food but also the information they ingest. The problem with the current crop of people running the show is their incredibly short lives and attention spans. (I wonder if this will change when an anti-agapic is discovered. When we have longer lives, will we still be so stuck in short cycles? Perhaps we delude ourselves into thinking that the ping of endorphins would somehow be offset by the knowledge that we have more time. Maybe we would just have longer lives but still pass through them as junkies.)

The second category is my interest in artificial intelligence and machine learning and big data. That’s up next.