What We Talk about When We Talk about Stories

Rejected for a special issue of the Journal of Cultural Analytics, but still, I think, an interesting project and one I will continue to pursue. If anyone else is interested, this is part of a larger project I have in mind, and I am open to forming a working group.

Current efforts to treat narrative computationally tend to focus on either the very small or the very large. Studies of small texts, some only indifferently narrative in nature, have been the focus of those interested in social media, networks, and natural language technologies, work largely dominated by the information and computer sciences. Studies of large texts, so large that they contain many kinds of modality with narrative the dominant one, have largely been the purview of the field we now tend to call the digital humanities, dominated by literary studies, classics, and history.

The current work proposes to examine the texts that fall in the middle: larger than a few dozen words, but smaller than tens, or hundreds, of thousands of words. These are the texts that have historically been the purview of two fields that sit on either side of the divide between the humanities and the human sciences: folklore studies and anthropology, respectively.

The paper profiles the knot of issues that keeps these texts out of our scholarly-scientific systems. The most significant issue is the matter of “visibility,” of accessibility, of these texts as texts and thus also as data: largely oral in nature, most folk or traditional narratives have necessarily been the product of a transcription process that cannot guarantee the same kind of textuality as a “born literary” text. (The borrowing of the notion of natality is somewhat purposeful here, since we often distinguish between texts that have been, sometimes laboriously, digitized and those that were “born digital.”) As scholarly fictions, if you will, they are largely embedded within the texts that treat them, only occasionally available in collections. With limited availability, and traditionally outside the realm of the fields that currently dominate the digital humanities, folk/traditional/oral narratives are not yet part of the larger project to model narrative, nor of efforts to consider the “shape of stories.”

This accessibility gap leaves both human and textual populations overlooked: most of the world’s verbal narratives are in fact oral in nature, millions upon millions are produced every day by millions and millions of people, and those narratives tend to range in size from somewhere around a hundred words to, perhaps, a few thousand words. The result is that any current model or notion of shape has simply let the wrong “figures figure figures.” Put another way, there can be no shape of stories without these stories.

Populating the Popular

With the rise of Lore from an obscure podcast about odd moments in “history” to an Amazon production, there has been a concomitant rise in interest in the possibilities for expanding the scope of the engagement between folklore studies and some form of a “popular audience.” At least two folklorists I know have been contacted by production companies looking to be a part of this emergent interest.

Like its cousin, history, folklore studies has had a strange, and often estranged, relationship with popular media. Some of the popular contact has been initiated by folklorists themselves: e.g., Jan Harold Brunvand. Brunvand was a much beloved individual among the folklorists I know, which seems unlike how historians felt about, say, Stephen Ambrose (I know, I know, Ambrose had other issues, e.g., plagiarism). There’s also the recent discussion among historians about (yet another) Ken Burns film. (See Jonathan Zimmerman’s “What’s So Bad about Ken Burns?”)

Jeffrey Tolbert has written about this and even engaged in a dialogue with the creator of Lore. (For those interested, Tolbert has a personal essay in New Directions in Folklore: [here][].)

[here]: https://scholarworks.iu.edu/journals/index.php/ndif/article/view/20037

Ignoring Unicode Decode Errors

Working this morning with a sample corpus of fraudulent emails (Rachael Tatman’s Fraudulent Email Corpus on Kaggle), I found myself unable to get past reading the file, thanks to decoding errors:

codec can't decode byte 0xc2

Oof. That byte 0xc2 has bitten me before — I think it may be a Windows thing, but I don’t remember right now, and, more importantly, I don’t care. Data loss is not important in this moment, so simply ignoring the error is my best course forward:

import codecs

# Open the file as UTF-8, silently dropping any bytes that fail to decode.
fh = codecs.open("fraudulent_emails_small.txt", "r", encoding="utf-8", errors="ignore")

And done. Thanks, as usual, to a great StackOverflow thread.
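
For what it’s worth, in Python 3 the built-in open accepts the same errors parameter, so the codecs import can be skipped entirely. A minimal sketch, assuming the same file:

# Python 3's built-in open also takes encoding and errors arguments.
with open("fraudulent_emails_small.txt", "r", encoding="utf-8", errors="ignore") as fh:
    text = fh.read()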

BTW, thank you Rachael for making the dataset available!

ACB at LBF

Louisiana Book Festival 2017

I am delighted to announce that The Amazing Crawfish Boat will be one of the featured books at this year’s Louisiana Book Festival. The book talk is scheduled for Saturday afternoon, 3:30 p.m. to 4 p.m. in the First Floor Meeting Room of the Capitol Park Museum. If you’re at the Festival, come say hello or swing by the festival’s store after the talk to find me signing books. See you there!

AIs Talk among Themselves

While science fiction has a long history of human-AI/robot interaction, especially in terms of dialogue, the idea of robots/AIs talking to each other gained a lot more currency in the wake of two Facebook AIs seemingly developing their own language. First, a more reasoned summary of what happened at Facebook from the BBC. And now something a bit more sensational. This Quora post also has a bit more on what happened at Facebook.

All of this concern about AIs talking to each other has a history, at least in science fiction. One moment to consider occurred in 1970’s Colossus: The Forbin Project, in which the USA built a supercomputer to oversee its strategic defense systems (missiles, bombers, you name it), only to discover that the USSR (now Russia) had a similar computer. It’s not too long before the two computers demand to talk directly to each other, then merge to form “World Control.”

One good place to start a larger history of robots and AIs talking to each other is Emily Asher-Perrin’s survey on Tor. (Tor is a long-time publisher of science fiction and fantasy literature; their website contains a mix of original fiction, thoughtful essays, and read- or watch-alongs of classic or beloved works in the genres.)

(Perhaps one thing to think about is the difference between robots as corporealized entities and artificial intelligences as noncorporeal entities: our responses to dialogue between such entities seem to differ significantly based on whether the consciousness is individuated in a way that our own seems to be.)

Python Modules You Didn’t Know You Needed

One of the things that happens as you nurture and grow a software stack is that you begin to take its functionality for granted, and, when you are faced with the prospect of re-creating it elsewhere or all over again, you realize you need better documentation. My work is currently founded on Python, and I have already documented the great architecture that is numpy + scipy + nltk + pandas + matplotlib + … you get the idea.

  • jupyter is central to how I work my way through code, and when I need to present that code, I am delighted that jupyter gives me the option to present a notebook as a collection of slides. RISE makes those notebooks fly using Reveal.js.
  • missingno “provides a small toolset of flexible and easy-to-use missing data visualizations and utilities that allows you to get a quick visual summary of the completeness (or lack thereof) of your dataset. It’s built using matplotlib, so it’s fast, and takes any pandas DataFrame input that you throw at it, so it’s flexible. Just pip install missingno to get started.” (A minimal usage sketch follows just after this list.)
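
Here is a minimal sketch of the missingno workflow; the CSV file name is a hypothetical stand-in for whatever dataset you have at hand:

import pandas as pd
import missingno as msno

# Load any tabular data with missing values (file name is hypothetical).
df = pd.read_csv("some_dataset.csv")

# Draw the nullity matrix: one column per DataFrame column,
# with gaps marking missing values.
msno.matrix(df)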

I’ve got more … I just need to list them out.

Append a Python List Using a List Comprehension

In some code I am working with at the moment, I need to be able to generate a list of labels based on a variable number that I provide elsewhere in a script. In this case, I am working with scikit-learn’s topic modeling functions, and as I work iteratively through a given corpus, I am regularly adjusting the number of topics I think “fit” the corpus. Elsewhere in the script, I use pandas to create a dataframe with the names of the texts as row labels and the topic numbers as column labels.

df_lda_DTM = pd.DataFrame(data=lda_W, index=docs, columns=topic_labels)

In the script, I simply use n_components to specify the number of topics with which the function, LDA or NMF, is to work.
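
For context, the surrounding setup looks something like the sketch below; the document-term matrix dtm is an assumption on my part (e.g., the output of a CountVectorizer), as is the variable naming:

from sklearn.decomposition import LatentDirichletAllocation

n_components = 3  # adjusted iteratively as I re-fit the corpus

# dtm is a document-term matrix, assumed to have been built earlier in the script.
lda = LatentDirichletAllocation(n_components=n_components)
lda_W = lda.fit_transform(dtm)  # document-topic weights, one row per text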

I needed some way to generate the topic labels on the fly so that I would not be stuck with manually editing this:

topic_labels = ["Topic 0", "Topic 1", "Topic 2"]

I was able to do so with a for loop that looked like this:

topic_labels = []
for i in range(0, n_components):
    instance = "Topic {}".format(i)
    topic_labels.append(instance)

Eventually, it dawned on me that range only needs the upper bound, so I could drop the 0 inside the parentheses:

topic_labels = []
for i in range(n_components):
    topic_labels.append("Topic {}".format(i))

That works just fine, but, while not a big block of code, this piece is part of a much longer script, and it is just a passing bit of code that does one very small thing. If I could get it down to a single line, using a list comprehension, I would make the overall script much easier to read. One line should be enough.

Enter Python’s list comprehension, a bit of syntactic sugar, as pythonistas like to call it, that I have by no means, er, fully comprehended. Still, here’s an opportunity to learn a little bit more.

So, following the usual guidance for how you re-block a loop as a list comprehension, I tried this:

topic_labels = [topic_labels.append("Topic {}".format(i)) for i in range(n_components)]

Better coders than I will recognize that this will not work: list.append mutates the list in place and returns None, so the comprehension collects a list of [None, None, None].
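
A quick way to see why in the interpreter (my aside, not part of the original script):

# append returns None; the mutation happens as a side effect.
labels = []
result = labels.append("Topic 0")
print(result)  # None
print(labels)  # ['Topic 0']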

But appending to a list is simply one way of building a list, of adding elements to a list, isn’t it? I could use Python’s string concatenation to pull this off, couldn’t I? Yes, yes I could, and did:

topic_labels = ["Topic " + str(i) for i in range(n_components)]

It couldn’t be simpler, or shorter. And it works:

print(topic_labels)
['Topic 0', 'Topic 1', 'Topic 2']
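
For completeness, the format call from the failed attempt works perfectly well inside a comprehension, and on Python 3.6 or later an f-string does the same job (the version requirement is the only assumption here):

topic_labels = ["Topic {}".format(i) for i in range(n_components)]
# or, on Python 3.6+:
topic_labels = [f"Topic {i}" for i in range(n_components)]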