Turning Words into Numbers

As Katherine Kinnaird and I continue our work on the TED Talks, we have found ourselves drawn to examine more closely the notion of topics, which we both feel has been underexamined in its use in the humanities.

Most humanists use an implementation of LDA, which we will probably also use simply to stay in parallel, but at some point in our work, frustrated by my inability to get LDA to work within Python, I picked up Alan Riddell’s DARIAH tutorial and drafted an implementation of NMF topic modeling for our corpus. One advantage I noticed right away, in comparing the results to earlier work I had done with Jonathan Goodwin, was what seemed like a much more stable set of word clusters in the algorithmically derived topics.

Okay, good. But Kinnaird noticed that stopwords kept creeping into the topics, which raised larger questions about how NMF does what it does, and that meant, because she’s so thorough, backing up a bit and making sure we understand how NMF works.

What follows is an experiment to understand the shape and nature of the tf matrix, the tfidf matrix, and the output of the sklearn NMF algorithm. Some of this is driven by the following essays:

To start our adventure, we needed a small set of texts with sufficient overlap that we could later successfully derive topics from them. I set myself the task of creating ten sentences, each of approximately ten words. Careful readers who take the time to read the sentences themselves will, I hope, forgive me for how reflexive the texts are, but that did seem appropriate given the overall reflexive nature of this task.

# =-=-=-=-=-=-=-=-=-=-=
# The Toy Corpus
# =-=-=-=-=-=-=-=-=-=-= 

sentences = ["Each of these sentences consists of about ten words.",
             "Ten sentence stories and ten word stories were once popular.",
             "Limiting the vocabulary to ten words is difficult.",
             "It is quite difficult to create sentences of just ten words",
             "I need, in fact, variety in the words used.",
             "With all these texts being about texts, there will be few topics.",
             "But I do not want too much variety in the vocabulary.",
             "I want to keep the total vocabulary fairly small.",
             "With a small vocabulary comes a small matrix.",
             "The smaller the matrix the more we will be able to see how things work."]


# =-=-=-=-=-=-=-=-=-=-=
# The Stopwords for this corpus
# =-=-=-=-=-=-=-=-=-=-= 

stopwords = ["a", "about", "all", "and", "be", "being", "but", "do", "each", "few", 
             "how", "i", "in", "is", "it", "more", "much", "not", "of", "once", "the", 
             "there", "these", "to", "too", "want", "we", "were", "will", "with"]

Each text is simply a sentence in a list of strings. Below the texts is the custom stopword list for this corpus. For those curious, there are a total of 102 tokens in the corpus and 30 stopwords. Once the stopwords are applied, 49 tokens remain, representing 31 distinct words.
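As a quick sanity check on those numbers, we can count the raw tokens and the stopwords directly (this assumes the sentences and stopwords lists defined above):

raw_tokens = ' '.join(sentences).lower().split()
print(len(raw_tokens), len(stopwords))
# Expect 102 raw tokens and 30 stopwords, matching the counts above.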

# =-=-=-=-=-=
# Clean & Tokenize
# =-=-=-=-=-=

import re
from nltk.tokenize import WhitespaceTokenizer

tokenizer = WhitespaceTokenizer()
# stopwords = re.split('\s+', open('../data/tt_stop.txt', 'r').read().lower())

# Loop to tokenize, stop, and stem (if needed) texts.
tokenized = []
for sentence in sentences:   
    raw = re.sub(r"[^\w\d'\s]+",'', sentence).lower()
    tokens = tokenizer.tokenize(raw)
    stopped_tokens = [word for word in tokens if word not in stopwords]
    tokenized.append(stopped_tokens)


# =-=-=-=-=-=-=-=-=-=-=
# Re-Assemble Texts as Strings from Lists of Words
# (because this is what sklearn expects)
# =-=-=-=-=-=-=-=-=-=-= 

texts = []
for item in tokenized:
    the_string = ' '.join(item)
    texts.append(the_string)
for text in texts:
    print(text)
sentences consists ten words
ten sentence stories ten word stories popular
limiting vocabulary ten words difficult
quite difficult create sentences just ten words
need fact variety words used
texts texts topics
variety vocabulary
keep total vocabulary fairly small
small vocabulary comes small matrix
smaller matrix able see things work
all_words = ' '.join(texts).split()
print("There are {} tokens representing {} words."
      .format(len(all_words), len(set(all_words))))
There are 49 tokens representing 31 words.

We will explore below the possibility of using the sklearn module’s built-in tokenization and stopword abilities, but while I continue to teach myself that functionality, we can move ahead with understanding the vectorization of a corpus.

There are a lot of ways to turn a series of words into a series of numbers. One of the principal ways ignores the particular context in which a word occurs, as we might understand it within a given sentence, and simply considers a word in relationship to the other words in a text. That is, one way to turn words into numbers is simply to count the words in a text, reducing it to what is known as a “bag of words.” (There’s a lot of linguistics and information science that validates this approach, but it will always chafe most humanists.)
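Before turning to sklearn, here is a minimal sketch of the bag-of-words idea, using nothing more than Python's collections.Counter on the first toy sentence: word order disappears and only the counts remain.

from collections import Counter

bag = Counter("each of these sentences consists of about ten words".split())
print(bag)
# Counter({'of': 2, 'each': 1, 'these': 1, 'sentences': 1, 'consists': 1, 'about': 1, 'ten': 1, 'words': 1})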

If we run our corpus of ten sentences through the CountVectorizer, we will get a representation of it as a series of numbers, each representing the count of a particular word within a particular text:

# =-=-=-=-=-=-=-=-=-=-=
# TF
# =-=-=-=-=-=-=-=-=-=-= 
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

vec = CountVectorizer()
tf_data = vec.fit_transform(texts).toarray()
print(tf_data.shape)
print(tf_data)
(10, 31)
[[0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 2 2 0 0 0 0 0 0 0 1 0 0]
 [0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 0]
 [0 0 0 1 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0]
 [0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 1 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0]
 [0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0]
 [0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 1 0 0 0]
 [1 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1]]

The CountVectorizer in sklearn builds a set of words out of all the tokens, as we did above, then counts the number of times each word occurs within a given text, returning that text as a vector. Thus, the second sentence above:

"Ten sentence stories and ten word stories were once popular." 

which we had tokenized and stopworded to become:

ten sentence stories ten word stories popular

becomes a list of numbers, or a vector, that looks like this:

0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 2 2 0 0 0 0 0 0 0 1 0 0

I chose the second sentence because it has two words that occur twice, ten and stories, so that it didn’t look like a line of binary. If you stack all ten texts on top of each other, you get a matrix of 10 rows, each row a text, and 31 columns, each column one of the important, lexical words.
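A quick way to see where those values sit, assuming the tf_data array from above: numpy's nonzero points us to the occupied columns, and the two 2s turn out to live in columns 19 and 20.

import numpy as np

print(np.nonzero(tf_data[1])[0])            # [12 15 19 20 28]
print(tf_data[1][np.nonzero(tf_data[1])])   # [1 1 2 2 1]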

Based on the location of the two twos, my guess is that the CountVectorizer alphabetizes its list of words, which can also be considered as features of a text. A quick check of our set of words, sorted alphabetically, is our first step in confirmation. (It also reveals one of the great problems of working with words: “sentence” and “sentences” as well as “word” and “words” are treated separately, and so where a human being would regard those as two lexical entries, the computer treats them as four. This is one argument for stemming, but stemming, so far as I have encountered it, is not only no panacea, it also creates other problems.)

the_words = list(set(all_words))
the_words.sort()
print(the_words)
['able', 'comes', 'consists', 'create', 'difficult', 'fact', 'fairly', 'just', 'keep', 'limiting', 'matrix', 'need', 'popular', 'quite', 'see', 'sentence', 'sentences', 'small', 'smaller', 'stories', 'ten', 'texts', 'things', 'topics', 'total', 'used', 'variety', 'vocabulary', 'word', 'words', 'work']

We can actually get that same list from the vectorizer itself with the get_feature_names method:

features = vec.get_feature_names()
print(features)
['able', 'comes', 'consists', 'create', 'difficult', 'fact', 'fairly', 'just', 'keep', 'limiting', 'matrix', 'need', 'popular', 'quite', 'see', 'sentence', 'sentences', 'small', 'smaller', 'stories', 'ten', 'texts', 'things', 'topics', 'total', 'used', 'variety', 'vocabulary', 'word', 'words', 'work']

We can also inspect the vectorizer’s vocabulary_ attribute, which reveals that sklearn stores the mapping as a dictionary with each term as the key and that term’s column index, not its count, as the value:

vocab_index = vec.vocabulary_
print(vocab_index)
{'comes': 1, 'difficult': 4, 'need': 11, 'matrix': 10, 'vocabulary': 27, 'just': 7, 'see': 14, 'quite': 13, 'smaller': 18, 'consists': 2, 'texts': 21, 'variety': 26, 'sentence': 15, 'total': 24, 'popular': 12, 'create': 3, 'work': 30, 'topics': 23, 'word': 28, 'limiting': 9, 'words': 29, 'ten': 20, 'able': 0, 'keep': 8, 'sentences': 16, 'fairly': 6, 'stories': 19, 'things': 22, 'used': 25, 'fact': 5, 'small': 17}
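If what we actually want are corpus-wide counts for each term, one way to get them, assuming the vec and tf_data objects from above, is to sum the columns of the count matrix and pair the sums with the feature names:

corpus_counts = dict(zip(vec.get_feature_names(), tf_data.sum(axis=0)))
print(corpus_counts['ten'], corpus_counts['vocabulary'])
# 5 4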

It’s also worth pointing out that we can get a count of particular terms within our corpus by feeding the CountVectorizer a vocabulary argument. Here I’ve prepopulated a list with three of our terms — “sentence”, “stories”, and “vocabulary” — and the function returns an array which counts only the occurrence of those three terms across all ten texts:

# =-=-=-=-=-=-=-=-=-=-=
# Controlled Vocabulary Count
# =-=-=-=-=-=-=-=-=-=-= 

tags = ['sentence', 'stories', 'vocabulary']
cv = CountVectorizer(vocabulary=tags)
data = cv.fit_transform(texts).toarray()
print(data)
[[0 0 0]
 [1 2 0]
 [0 0 1]
 [0 0 0]
 [0 0 0]
 [0 0 0]
 [0 0 1]
 [0 0 1]
 [0 0 1]
 [0 0 0]]

So far we’ve been trafficking in raw counts, or occurrences, of a word (aka term, aka feature) in our corpus. Chances are, longer texts, which simply have more words, will have more of any given word, which means they may come to be overweighted if we rely only on occurrences. Fortunately, we can normalize each text’s counts by the length of the text, so that we can compare how often a word is used relative to the size of the text across all the texts in a corpus. That is, we can get a term’s frequency. (Strictly speaking, sklearn’s default is to scale each row by its Euclidean, or L2, length rather than by its raw word count, as the values below will show.)

As I was working on this bit of code, I learned that sklearn stores this information in a compressed sparse row matrix, wherein a series of (text, term) coordinates are followed by a value. I have captured the first two texts below. (Note the commented out toarray method in the second-to-last line. It’s there so often in sklearn code that I had come to take it for granted.)

from sklearn.feature_extraction.text import TfidfTransformer

tf_transformer = TfidfTransformer(use_idf=False).fit(tf_data)
words_tf = tf_transformer.transform(tf_data)#.toarray()
print(words_tf[0:2])
  (0, 2)    0.5
  (0, 16)   0.5
  (0, 20)   0.5
  (0, 29)   0.5
  (1, 12)   0.301511344578
  (1, 15)   0.301511344578
  (1, 19)   0.603022689156
  (1, 20)   0.603022689156
  (1, 28)   0.301511344578

And here’s that same information represented as an array:

words_tf_array = words_tf.toarray()
print(words_tf_array[0:2])
[[ 0.          0.          0.5         0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.          0.
   0.          0.          0.5         0.          0.          0.          0.5
   0.          0.          0.          0.          0.          0.          0.
   0.          0.5         0.        ]
 [ 0.          0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.30151134
   0.          0.          0.30151134  0.          0.          0.
   0.60302269  0.60302269  0.          0.          0.          0.          0.
   0.          0.          0.30151134  0.          0.        ]]
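Those values are nothing more than the raw counts scaled by the Euclidean (L2) norm of each row, which is the transformer's default. A quick check against the second text, assuming tf_data from above:

import numpy as np

row = tf_data[1]
print(row / np.linalg.norm(row))
# The 1s become 1/sqrt(11) ≈ 0.30151134 and the 2s become 2/sqrt(11) ≈ 0.60302269.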

Finally, we can also weight words within a document by how rarely they appear across the documents of the corpus (the inverse document frequency), thus lowering the value of common words.

# =-=-=-=-=-=-=-=-=-=-=
# TFIDF
# =-=-=-=-=-=-=-=-=-=-= 

tfidf = TfidfVectorizer()
tfidf_data = tfidf.fit_transform(texts)#.toarray()
print(tfidf_data.shape)
print(tfidf_data[1]) # values for second sentence
(10, 31)
  (0, 12)   0.338083066465
  (0, 28)   0.338083066465
  (0, 19)   0.67616613293
  (0, 15)   0.338083066465
  (0, 20)   0.447100526936

And now, again, in the more common form of an array:

tfidf_array = tfidf_data.toarray()
print(tfidf_array[1]) # values for second sentence
[ 0.          0.          0.          0.          0.          0.          0.
  0.          0.          0.          0.          0.          0.33808307
  0.          0.          0.33808307  0.          0.          0.
  0.67616613  0.44710053  0.          0.          0.          0.          0.
  0.          0.          0.33808307  0.          0.        ]
#tfidf_recall = tfidf_data.get_feature_names() # Not working: tfidf_data is the matrix returned by fit_transform, not the vectorizer
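The reason that last line fails is that tfidf_data is the document-term matrix returned by fit_transform, while get_feature_names belongs to the vectorizer itself. Asking the TfidfVectorizer directly works, and its idf_ attribute holds the learned inverse-document-frequency weights in the same order:

print(tfidf.get_feature_names())
print(tfidf.idf_)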

Staying within the sklearn ecosystem

What if we do all tokenization and normalization in sklearn?

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# This is the bog-standard version from the documentation
# test_vec = CountVectorizer(input=u'content', 
#                            encoding=u'utf-8', 
#                            decode_error=u'strict', 
#                            strip_accents=None, 
#                            lowercase=True, 
#                            preprocessor=None, 
#                            tokenizer=None, 
#                            stop_words=stopwords, 
#                            token_pattern=u'(?u)\b\w\w+\b', 
#                            ngram_range=(1, 1), 
#                            analyzer=u'word', 
#                            max_df=1.0, 
#                            min_df=1, 
#                            max_features=None, 
#                            vocabulary=None, 
#                            binary=False, 
#                            dtype=<type 'numpy.int64'>)
test_vec = CountVectorizer(lowercase = True, 
                           stop_words = stopwords, 
                           token_pattern = u'(?u)\b\w\w+\b', 
                           ngram_range = (1, 1), 
                           analyzer = u'word')

#test_data = test_vec.fit_transform(texts).toarray() # --> ValueError: empty vocabulary
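The empty-vocabulary error is, I suspect, the token_pattern: in a non-raw Python string, \b is a backspace character rather than a regex word boundary, so the pattern matches nothing. Here is a sketch of the same vectorizer with a raw string (the token_pattern argument could also simply be omitted, since this is sklearn's default pattern anyway):

test_vec = CountVectorizer(lowercase=True,
                           stop_words=stopwords,
                           token_pattern=r'(?u)\b\w\w+\b',
                           ngram_range=(1, 1),
                           analyzer='word')
test_data = test_vec.fit_transform(texts).toarray()
print(test_data.shape)
# Expect (10, 31), matching the CountVectorizer run above.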

Counting Control Words in a Text

As I was working on a toy corpus to understand the various facets of sklearn, I came across this very clear example of how to count specific words in a collection of texts:

import sklearn
cv = sklearn.feature_extraction.text.CountVectorizer(vocabulary=['hot', 'cold', 'old'])
data = cv.fit_transform(['pease porridge hot', 'pease porridge cold', 'pease porridge in the pot', 'nine days old']).toarray()
print(data)
[[1 0 0]
 [0 1 0]
 [0 0 0]
 [0 0 1]]

Please note that I’ve changed the original a bit to make it easier to deploy this in a longer script.

Test Post with JP Markdown and Syntax Highlighting Activated

Okay, here’s some regular prose, which isn’t explanatory at all, and then here comes a block of code:

from stop_words import get_stop_words
from nltk.corpus import stopwords

mod_stop = get_stop_words('en')
nltk_stop = stopwords.words("english")

print("mod_stop is {} words, and nltk_stop is {} words".format(len(mod_stop), len(nltk_stop)))

returns:

mod_stop is 174 words, and nltk_stop is 153 words
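Since the point of pulling in both lists is to compare them, a set intersection and the two set differences show how much the lists overlap and which words each has that the other lacks (assuming the mod_stop and nltk_stop lists above):

shared = set(mod_stop) & set(nltk_stop)
print(len(shared))
print(sorted(set(mod_stop) - set(nltk_stop)))
print(sorted(set(nltk_stop) - set(mod_stop)))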

Getting Word Frequencies for 2000+ Texts

What I’ve been working on for the past few days is in preparation for attempting a topic model using the more established LDA instead of NMF, to see how well the two compare, with the understanding that, since there is rarely a one-to-one matchup of topics within either method, there will be no such match across them.

Because LDA does not downweight common words on its own, the way the tf-idf weighting behind our NMF approach does, you have to start with a stoplist. I know we can begin with Blei’s and a few other established lists, but I would also like to be able to compare those against our own results. My first thought was to build a dictionary of words and their frequencies within the corpus. For convenience’s sake, I am using the NLTK.

Just as a record of what I’ve done, here’s the usual code for loading the talks from the CSV with everything in it:

[code lang=python]
import pandas
import re

# Get all talks in a list & then into one string
colnames = ['author', 'title', 'date' , 'length', 'text']
df = pandas.read_csv('../data/talks-v1b.csv', names=colnames)
talks = df.text.tolist()
alltalks = " ".join(str(item) for item in talks) # Solves pbm of floats in talks

# Clean out all punctuation except apostrophes
all_words = re.sub(r"[^\w\d'\s]+",'',alltalks).lower()
[/code]

We still need to identify which talks have floats for values and determine what impact, if any, it has on the project.
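Here is a sketch of one way to find them, assuming the df loaded above: pandas reads empty cells as NaN, which is a float, so any row whose text value is not a string is a candidate.

[code lang=python]
# Rows whose text did not come in as a string (e.g., NaN for an empty cell)
problem_rows = df[~df['text'].apply(lambda x: isinstance(x, str))]
print(len(problem_rows))
print(problem_rows[['author', 'title']])
[/code]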

[code lang=python]
import nltk

tt_tokens = nltk.word_tokenize(all_words)

tt_freq = {}
for word in tt_tokens:
    try:
        tt_freq[word] += 1
    except KeyError:
        tt_freq[word] = 1
[/code]

Using this method, the dictionary has 63426 entries. Most of those are going to be single-entry items or named entities, but I do think it’s worth looking at them, as well as the high-frequency words that may not be a part of established stopword lists: I think it will be important to note those words which are specifically common to TED Talks.

I converted the dictionary to a list of tuples in order to be able to sort it; I see that there is a way to sort a dictionary directly in Python, but this is a way I know. Looking at the most common words, I saw that NLTK didn’t get rid of punctuation: I cleared this up by removing punctuation earlier in the process, keeping the contractions (words with apostrophes), which NLTK does not respect.
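For the record, the direct route is to sort the dictionary's items by value, or to let collections.Counter do the counting and the ranking in one step; both should give the same top of the list as the tuple approach below.

[code lang=python]
# Sorting the dictionary directly, by value
top20 = sorted(tt_freq.items(), key=lambda kv: kv[1], reverse=True)[:20]

# Or let Counter do the counting and ranking in one step
from collections import Counter
print(Counter(tt_tokens).most_common(20))
[/code]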

N.B. I tried doing this simply with a regex that split on whitespace, but I am still seeing contractions split into different words.

[code lang=python]
tt_freq_list = [(val, key) for key, val in tt_freq.items()]
tt_freq_list.sort(reverse=True)
tt_freq_list[0:20]

[(210294, 'the'),
(151163, 'and'),
(126887, 'to'),
(116155, 'of'),
(106547, 'a'),
(96375, 'that'),
(83740, 'i'),
(78986, 'in'),
(75643, 'it'),
(71766, 'you'),
(68573, 'we'),
(65295, 'is'),
(56535, "'s"),
(49889, 'this'),
(37525, 'so'),
(33424, 'they'),
(32231, 'was'),
(30067, 'for'),
(28869, 'are'),
(28245, 'have')]
[/code]

Keeping the apostrophes proved to be harder than I thought — and I tried going a “pure Python” route and splitting only on white spaces, trying both of the following:

[code lang=python]
word_list = re.split('\s+', all_words)
word_list = all_words.split()
[/code]

I still got (56535, "'s"). (The good news is that the counts match.)

Okay, good news. The NLTK white space tokenizer works:

[code lang=python]
from nltk.tokenize import WhitespaceTokenizer
white_words = WhitespaceTokenizer().tokenize(all_words)
[/code]

I tried using scikit-learn’s CountVectorizer, but it requires a list of strings, not one string, and it does not like that some of the texts are floats. So we’ll save dealing with that for when we look at this corpus as a corpus and not as one giant collection of words.

[code lang=python]
from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer()
word_counts = count_vect.fit_transform(talks)

# ValueError: np.nan is an invalid document, expected byte or unicode string.
[/code]
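Here is a sketch of the obvious workaround, coercing every talk to a string before handing the list over, which is the same move the join above makes for the single-string version:

[code lang=python]
from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer()
word_counts = count_vect.fit_transform(str(item) for item in talks)
print(word_counts.shape)
[/code]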

The final, working, script of the day produces the output we want:

[code lang=python]
# Tokenize on whitespace
from nltk.tokenize import WhitespaceTokenizer
tt_tokens = WhitespaceTokenizer().tokenize(all_words)

# Build a dictionary of words and their frequency in the corpus
tt_freq = {}
for word in tt_tokens:
    try:
        tt_freq[word] += 1
    except KeyError:
        tt_freq[word] = 1

# Build a list of tuples, sort, and see some results
tt_freq_list = [(val, key) for key, val in tt_freq.items()]
tt_freq_list.sort(reverse=True)
tt_freq_list[0:20]
[/code]

Top 10 Python libraries of 2016

Tryo Labs is continuing its tradition of retrospectives about the best Python libraries for the past year. This year, it seems, it’s all about serverless architectures and, of course, AI/ML. A lot of cool stuff happening in the latter space. Check out this year’s retrospective and also the discussion on Reddit. (And here’s a link to Tryo’s 2015 retrospective for those curious.)

Flowingdata has a list of their own: Best Data Visualization Projects of 2016. If you haven’t seen the one about the evolution of bacteria that is a “live” visualization conducted on a giant petri dish, check it out.

Building a Corpus-Specific Stopword List

How do you go about finding the words that occur in all the texts of a collection or in some percentage of texts? A Safari Oriole lesson I took in recently did the following, using two texts as the basis for the comparison:

[code lang=python]
from pybloom import BloomFilter

bf = BloomFilter(capacity = 1000, error_rate = 0.001)

for word in text1_words:
    bf.add(word)

intersect = set([])

for word in text2_words:
    if word in bf:
        intersect.add(word)

print(intersect)
[/code]
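Another route, staying with sklearn: CountVectorizer's max_df argument drops any word that shows up in more than a given share of the documents, and the fitted vectorizer records what it dropped in its stop_words_ attribute. That is one way to bootstrap a corpus-specific stopword list, sketched here on the assumption that texts is a list of strings, one per document:

[code lang=python]
from sklearn.feature_extraction.text import CountVectorizer

# Words appearing in more than 90% of the documents are dropped and recorded
cv = CountVectorizer(max_df=0.9)
cv.fit(texts)
print(sorted(cv.stop_words_))
[/code]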

UPDATE: I’m working on getting Markdown and syntax highlighting working. I’m running into difficulties with my beloved Markdown Extra plug-in, indicating I may need to switch to the Jetpack version. (I’ve switched before but not been satisfied with the results.)

Towards an Open Notebook Built on Python

As noted earlier, I am very taken with the idea of moving to an open notebook system: it goes well with my interest in keeping my research accessible not only to myself but also to others. Towards that end, I am in the midst of moving my notes and web captures out of Evernote and into DevonThink — a move made easier by a script that automates the process. I am still not a fan of DT’s UI, but its functionality cannot be denied or ignored. It quite literally does everything. This also means moving my reference library out of Papers, which I have had a love/hate relationship with for the past few years. (Much of this move is, in fact, prompted by the fact that I don’t quite trust the program after various moments of failure. I cannot deny that some of the failings might be of my own making, but, then again, this move is meant to foolproof my systems against the fail/fool point at the center of it all: me.)

Caleb McDaniel’s system is based on Gitit, which itself relies on Pandoc to do much of the heavy lifting. In his system, bibtex entries appear at the top of a note document and are, as I understand it, compiled as needed into larger, comprehensive bibtex lists. To get the bibtex entry at the top of the page into HTML for the wiki, McDaniel uses an OCaml library.

Why not, I wondered as I read McDaniel, attempt to keep as much of the workflow as possible within a single language? Since Python is my language of choice — mostly because I am too time and mind poor to attempt to master anything else — I decided to make the attempt in Python. As luck would have it, there is a bibtex2html module available for Python: [bibtex2html](https://github.com/goliveira/bibtex2html).

Now, whether the rest of the system is built on Sphinx or with MkDocs is the next matter — as is figuring out how to write a script that chains these things together so that I can approach the fluidity and assuredness of McDaniel.

I will update this post as I go. (Please note that this post will stay focused on the mechanics of such a system.)

Listing Python Modules

Sometimes you need to know which Python modules you already have installed. The easiest way to get a list is:

[code lang=python]
help('modules')
[/code]

This will give you a list of installed modules, typically as a series of columns. All you have are names, not version numbers. If you need to know a version number, then try:

[code lang=python]
import matplotlib
print(matplotlib.__version__)
# 1.5.1
[/code]
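If you want names and version numbers for everything at once, rather than importing modules one at a time, pkg_resources (part of setuptools) can walk the installed distributions:

[code lang=python]
import pkg_resources

for dist in sorted(pkg_resources.working_set, key=lambda d: d.project_name.lower()):
    print(dist.project_name, dist.version)
[/code]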

Python Site Generators

I have never been particularly impressed with Moodle, the learning management system used by my university and a number of other organizations. Its every impulse, it seems to me, is to increase the number of steps to get simple things done, I suppose in order to simplify more complex things for users with less tech savvy. Using markdown, for example, is painful, and there’s no way to control the presentation of materials unless you resort to one of its myriad under-explained, and probably under-thought, content packaging options. (I’ve never grokked the Moodle “book”, for example.)

To be honest, there are times when I feel the same way about WordPress, which has gotten GUIer and less sharp on a number of fronts — why oh why are categories and tags now unlimited in application?

I’m also less than clear on my university’s approach to intellectual property: they seem rather keen to claim everything and anything as their own, when they can’t even be bothered to give you basic production tools. (Hello? It’s been three years since I had access to a printer that didn’t involve me copying files to a flash drive and walking down stairs to load things onto a Windows machine that can only ever print PDFs.)

I decided I would give static site generation a try, particularly if I could compose in markdown, ReST, or even a Jupyter notebook (as a few of the generators appear to promise). I’m not interested in using this for blogging, and I will probably maintain it on a subdirectory of my own site, e.g. /teaching, and I hope to be able to sync between local and remote versions using Git. That seems straightforward, doesn’t it? (I’m also now thinking that I will stuff everything into the same directory and just have different pages, and subpages?, for each course. Just hang everything out there for all to see.)

As for the site generators themselves, there are a number of options:

  • Pelican is a popular one, but seems very blog oriented.
  • I’ve installed both Pelican and Nikola, and I ran the latter this morning and was somewhat overwhelmed by the number of directories it generated right away.
  • Cactus seems compelling, and has a build available for the Mac.
  • There is also Hyde.
  • I’m going to ignore blogofile for now, but it’s there and its development is active.
  • If all else fails, I have used Poole before. It doesn’t have a templating system or JavaScript or any of that, but maybe it’s better for it.

More on Normalizing Sentiment Distributions

Mehrdad Yazdani pointed out that some of my problems in normalization may have been the result of not having the right pieces in place, and so suggested some changes to the sentiments.py script. The result would seem to suggest that the two distributions are now comparable in scale — as well as on the same x-axis. (My Python-fu is not strong enough, yet, for me to determine how this error crept in.)

Raw Sentiment normalized with np.max(np.abs(a_list))

When I run these results through my averaging function, however, I get significant vertical compression:

Averaged Sentiment normalized with np.max(np.abs(a_list))

If I substitute np.linalg.norm(a_list) for np.max(np.abs(a_list)) in the script, I get the following results:

Raw Sentiment Normalized with numpy.linalg.norm

Averaged Sentiment Normalized with numpy.linalg.norm
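A small sketch of why the two normalizations sit at such different vertical scales: dividing by np.max(np.abs(a_list)) pins the largest magnitude at 1 no matter how long the series is, while dividing by np.linalg.norm(a_list), the Euclidean norm, shrinks every value roughly in proportion to the square root of the series length, which is consistent with the compression noted above.

[code lang=python]
import numpy as np

a_list = np.random.uniform(-1, 1, 1000)  # stand-in for a raw sentiment series

by_max = a_list / np.max(np.abs(a_list))    # largest magnitude is exactly 1.0
by_norm = a_list / np.linalg.norm(a_list)   # values on the order of 1/sqrt(1000)

print(np.abs(by_max).max(), np.abs(by_norm).max())
[/code]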

A Tale of Two Sentimental Signatures

I’m still working my way through the code that will, I hope, make it possible to compare different sentimental modules in Python effectively. While the code is available as a GitHub gist (linked at the end of this post), I wanted to post some of the early outcomes here, publishing my failure, as it were.

I began with the raw sentiments, which are not very interesting, since the different modules use different ranges: quite wide for Afinn, -1 to 1 for TextBlob, and between 0 and 1 for Indico.

Raw Sentiments: Afinn, Textblob, Indico

To make them more comparable, I needed to normalize them, and to make the whole of it more digestible, I needed to average them. I began with normalizing the values — see the gist linked at the end of this post — and you can already see there’s a divergence in the baseline for which I cannot yet account in my code:

Normalized Sentiment: Afinn and TextBlob

To be honest, I didn’t really notice this until I plotted the average, where the divergence becomes really apparent:

Average, Normalized Sentiments: Afinn and TextBlob

The code is available as a gist: https://gist.github.com/johnlaudun/5ea8234cc8d6f39b982648704c3824b0