Compelling Visualization Projects

The Rhythm of Food combines data from FooDB and Google Trends, looking for search patterns across time — and cleverly recognizing that the cycle of the year is a good way to organize time.

Of Types, Motifs, Tropes

For our next class, we are going to go a-hunting, tale-type hunting. I am going to bring an assortment of texts, some folktales and some not, that I will give you to track down. Your means of determining the nature of the texts will be the Tale-Type Index and the Motif Index. You will, I think, fairly quickly figure out how to use those two instruments to your best advantage.

It might also be a good moment to think about the nature of such cataloging efforts. One place to begin, as a kind of quick review of the origins and development of the indices, is the Wikipedia entry on the Aarne–Thompson classification systems. (There is a separate entry on motif worth reading.) Once there, you will see a reference to Alan Dundes’ rather recent (in terms of the indices themselves) consideration, “The Motif-Index and the Tale Type Index: A Critique”. (There is also Hans-Jörg Uther’s assessment in “Classifying Folktales”.)

The two indices work together to catalogue the tales within their pages by their constituent parts, motifs. As a number of observers have remarked, this is no small matter and has led some to regard the entire enterprise as hopeless, given the seemingly endless variability of the human imagination.

And yet, as seemingly old-fashioned as the tale-type and motif indices would seem to be, we have re-created them in TV Tropes. And so, it would seem, some of you have already played a drinking game to tale types. Congratulations.

Test Post with JP Markdown and Syntax Highlighting Activated

Okay, here’s some regular prose, which isn’t explanatory at all, and then here comes a block of code:

[code lang=python]
from stop_words import get_stop_words
from nltk.corpus import stopwords

mod_stop = get_stop_words('en')
nltk_stop = stopwords.words("english")

print("mod_stop is {} words, and nltk_stop is {} words".format(len(mod_stop), len(nltk_stop)))
[/code]

returns:

mod_stop is 174 words, and nltk_stop is 153 words

Jetpack Markdown Troubleshooting

Here’s a screenshot of what a fenced code block looks like with both the Jetpack markdown turned on and the Syntax Highlighting Evolved plug-in activated:

And here it is with the syntax highlighter turned off. I don’t quite understand why the WP code-block shortcode is showing up:

Getting Word Frequencies for 2000+ Texts

What I’ve been working on for the past few days is in preparation for attempting a topic model using the more established LDA instead of the NMF, to see how well the two compare, with the understanding that, since there is rarely a one-to-one matchup within either method, there will be no such match across them.

Because LDA does not filter out common words on its own, the way the NMF method does, you have to start with a stoplist. I know we can begin with Blei’s and a few other established lists, but I would also like to be able to compare those against our own results. My first thought was to build a dictionary of words and their frequency within the corpus. For convenience’s sake, I am using the NLTK.

Just as a record of what I’ve done, here’s the usual code for loading the talks from the CSV with everything in it:

[code lang=python]
import pandas
import re

# Get all talks in a list & then into one string
colnames = ['author', 'title', 'date', 'length', 'text']
df = pandas.read_csv('../data/talks-v1b.csv', names=colnames)
talks = df.text.tolist()
alltalks = " ".join(str(item) for item in talks) # Solves problem of floats in talks

# Clean out all punctuation except apostrophes
all_words = re.sub(r"[^\w\d'\s]+", '', alltalks).lower()
[/code]

We still need to identify which talks have floats for values and determine what impact, if any, they have on the project.
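
A quick sketch of how that check might go: pandas reads empty CSV cells as NaN, which is a float, so the null rows in the text column should be the culprits.

[code lang=python]
# A sketch of the check: empty cells come in as NaN (a float),
# so the rows where text is null are the ones to inspect.
print(df[df.text.isnull()])
[/code]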

[code lang=python]
import nltk

tt_tokens = nltk.word_tokenize(all_words)

# Build a dictionary of words and their frequency in the corpus
tt_freq = {}
for word in tt_tokens:
    try:
        tt_freq[word] += 1
    except KeyError:
        tt_freq[word] = 1
[/code]

Using this method, the dictionary has 63426 entries. Most of those are going to be single-occurrence words or named entities, but I do think it’s worth looking at them, as well as at the high-frequency words that may not be a part of established stopword lists: I think it will be important to note those words which are specifically common to TED Talks.
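
As a first pass at that comparison, something like the following sketch would surface both the single-occurrence words and the high-frequency words that the NLTK list misses. (The cutoff of 1000 is an arbitrary placeholder.)

[code lang=python]
# A rough sketch: single-occurrence words on one side, high-frequency
# words missing from the NLTK stoplist on the other. The cutoff of
# 1000 is an arbitrary placeholder.
from nltk.corpus import stopwords

nltk_stop = set(stopwords.words('english'))
hapaxes = [word for word, count in tt_freq.items() if count == 1]
ted_common = sorted(((count, word) for word, count in tt_freq.items()
                     if count > 1000 and word not in nltk_stop), reverse=True)
print(len(hapaxes), ted_common[:10])
[/code]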

I converted the dictionary to a list of tuples in order to be able to sort it. (I see that there is a way to sort a dictionary directly in Python, but this is the way I know.) Looking at the most common words, I saw that NLTK didn’t get rid of punctuation, so I cleared this up by removing punctuation earlier in the process, keeping the contractions (words with apostrophes), which the NLTK does not respect.
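
For the record, the built-in route looks something like this, using sorted() with a key function on the dictionary’s items:

[code lang=python]
# The built-in alternative mentioned above: sort the dictionary's items
# by their counts directly, rather than building tuples by hand.
top_words = sorted(tt_freq.items(), key=lambda item: item[1], reverse=True)
top_words[0:20]  # [('the', 210294), ('and', 151163), ...]
[/code]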

N.B. I tried doing this simply with a regex that split on white space, but I am still seeing contractions split into different words.

[code lang=python]
tt_freq_list.sort(reverse=True)
tt_freq_list[0:20]

[(210294, 'the'),
(151163, 'and'),
(126887, 'to'),
(116155, 'of'),
(106547, 'a'),
(96375, 'that'),
(83740, 'i'),
(78986, 'in'),
(75643, 'it'),
(71766, 'you'),
(68573, 'we'),
(65295, 'is'),
(56535, "'s"),
(49889, 'this'),
(37525, 'so'),
(33424, 'they'),
(32231, 'was'),
(30067, 'for'),
(28869, 'are'),
(28245, 'have')]
[/code]

Keeping the apostrophes proved to be harder than I thought — and I tried going a “pure Python” route and splitting only on white spaces, trying both of the following:

[code lang=python]
word_list = re.split(r'\s+', all_words)
word_list = all_words.split()
[/code]

I still got (56535, "'s"). (The good news is that the counts match.)

Okay, good news. The NLTK white space tokenizer works:

[code lang=python]
from nltk.tokenize import WhitespaceTokenizer
white_words = WhitespaceTokenizer().tokenize(all_words)
[/code]
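
A quick sanity check, just as a sketch, that contractions really do come through whole:

[code lang=python]
# Sanity check: "'s" should no longer be a token of its own, and
# contractions like "it's" should come through intact.
print("'s" in white_words)                            # expect False
print([w for w in white_words if w.endswith("'s")][:5])
[/code]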

I tried using scikit-learn’s CountVectorizer, but it requires a list of strings, not one string, and it does not like that some of the texts are floats. So we’ll save dealing with that for when we come to look at this corpus as a corpus, and not as one giant collection of words.

[code lang=python]
from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer()
word_counts = count_vect.fit_transform(talks)

# Raises: ValueError: np.nan is an invalid document, expected byte or unicode string.
[/code]
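
When we do come back to it, one likely workaround, sketched here but untested against this corpus, is coercing the text column to strings first:

[code lang=python]
# A possible workaround for later: fill the NaN rows and coerce the
# column to strings, so CountVectorizer sees only text.
talks_clean = df.text.fillna('').astype(str).tolist()
word_counts = count_vect.fit_transform(talks_clean)
[/code]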

The final, working, script of the day produces the output we want:

[code lang=python]
# Tokenize on whitespace
from nltk.tokenize import WhitespaceTokenizer
tt_tokens = WhitespaceTokenizer().tokenize(all_words)

# Build a dictionary of words and their frequency in the corpus
tt_freq = {}
for word in tt_tokens:
    try:
        tt_freq[word] += 1
    except KeyError:
        tt_freq[word] = 1

# Build a list of tuples, sort, and see some results
tt_freq_list = [(val, key) for key, val in tt_freq.items()]
tt_freq_list.sort(reverse=True)
tt_freq_list[0:20]
[/code]
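
Looking ahead to the LDA comparison that motivated all of this, here is a minimal sketch of how a stoplist drawn from these counts might feed into scikit-learn’s LDA. The top-100 cutoff and the 45 topics are placeholders, not settled values:

[code lang=python]
# A minimal sketch, not the final pipeline: a custom stoplist feeding
# scikit-learn's LDA. The top-100 cutoff and n_components=45 are
# arbitrary placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

custom_stop = [word for count, word in tt_freq_list[0:100]]
vectorizer = CountVectorizer(stop_words=custom_stop)
doc_term = vectorizer.fit_transform(str(talk) for talk in talks)
lda = LatentDirichletAllocation(n_components=45, random_state=0)
doc_topics = lda.fit_transform(doc_term)
[/code]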

MacBook Options in Early 2017

I realized at some point recently that when I teach and when I present at conferences, I am using my personal laptop, putting it at risk when my university should be providing me the proper equipment to do those things. Fortunately, I have a bit of money left over from my professorship, and so I looked into what my portability options are:

One consideration would be the 11-inch MacBook Air, now discontinued (and never given the love it deserved):

Amazon has one for $700: Apple MacBook Air MD711LL/B 11.6-Inch Laptop (1.4GHz Intel Core i5 Dual-Core up to 2.7GHz, 4GB RAM, 128GB SSD, Wi-Fi, Bluetooth 4.0) (Certified Refurbished).

Apple has one for $849: MacBook Air 11.6/1.6GHz/4GB/128GB Flash. March 2015.

Or one for $929: Refurbished 11.6-inch MacBook Air 1.6GHz Dual-core Intel Core i5. Originally released March 2015. 11.6-inch. 4GB of 1600MHz LPDDR3 onboard memory. 256GB PCIe-based flash storage. 720p FaceTime HD Camera. Intel HD Graphics 6000.

With that price, I thought I should look into something more readily affordable: the 9.7-inch iPad Pro Wi-Fi 32GB – Space Gray released in March 2016 lists for $579.

That’s not bad, but a colleague of mine recently ordered one, and I took one look at the size of the keyboard and thought: no. So that leaves the more expensive option, especially since my university won’t buy refurbished gear: the MacBook 12.0/1.1GHz Dual-Core Intel Core m3/8GB/256GB Flash. April 2016. $1,249. (There was a refurbished version on the website for not a lot less, $1,189, but it did have a 512GB SSD. Win some, lose some.)