All of us owe a huge debt of gratitude to [Maida Owens][mo] and the [Louisiana Folklife Program][lfp]. She has single-handedly persevered in getting almost the entire run of the [_Louisiana Folklore Miscellany_ online][lfm], at least the tables of contents if not the articles themselves. Later issues, like the two I edited, *Cultural Catholicism* and *In the Wake of the Storms*, also have their articles available. (The contents are in chronological order with the oldest first, so those issues are toward the bottom of the page.)
My friend Jason Jackson passes on the news that at the annual meeting of the [Linguistic Society of America](http://lsadc.org/), the following resolution was passed:
> Whereas modern computing technology has the potential of advancing linguistic science by enabling linguists to work with datasets at a scale previously unimaginable; and
> Whereas this will only be possible if such data are made available and standards ensuring interoperability are followed; and
> Whereas data collected, curated, and annotated by linguists forms the empirical base of our field; …
> Therefore, be it resolved at the annual business meeting on 8 January 2010 that the Linguistic Society of America encourages members and other working linguists to:
> * make the full data sets behind publications available, subject to all relevant ethical and legal concerns; …
> * work towards assigning academic credit for the creation and maintenance of linguistic databases and computational tools; and
> * when serving as reviewers, expect full data sets to be published (again subject to legal and ethical considerations) and expect claims to be tested against relevant publicly available datasets.
As part of my [evolving relationship with Amazon.com](no link yet), I became aware of Amazon Web Services’ [AWS in Education](http://aws.amazon.com/education/) program:
> AWS in Education provides a set of programs that enable the worldwide academic community to easily leverage the benefits of Amazon Web Services for teaching and research. With AWS in Education, educators, academic researchers, and students can apply to obtain free usage credits to tap into the on-demand infrastructure of Amazon Web Services to teach advanced courses, tackle research endeavors and explore new projects – tasks that previously would have required expensive up-front and ongoing investments in infrastructure.
> With AWS you can requisition compute power, storage, database functionality, content delivery, and other services — gaining access to a suite of elastic IT infrastructure services as you demand them. AWS enables the academic community to inexpensively and rapidly build on global computing infrastructure to pursue course projects and accelerate their productivity and research results, while enjoying the same benefits of reliability, elasticity, and cost-effectiveness used by industry. The AWS in Education program offers: Teaching Grants for educators using AWS in courses (plus access to selected course content resources); Research Grants for academic researchers using AWS in their work; Project Grants for student organizations pursuing entrepreneurial endeavors; Tutorials for students that want to use AWS for self-directed learning; Solutions for university administrators looking to use cloud computing to be more efficient and cost-effective in the university’s IT Infrastructure.
The National Academies Press has just released a 180-page book on [Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age](http://www.nap.edu/catalog.php?record_id=12615). The link will take you to the book’s page on the press’s website. It’s available as a paperback for $31.46, as a PDF for $27, or as a combo for $41. You can also follow a link on the page to read it on-line for free.
An article in a recent [PNAS (Proceedings of the National Academy of Sciences)](http://www.pnas.org/) describes the use of stylometry, the study of artwork through math and statistics, to analyze paintings in order to determine whether they are the work of the attributed master, the work of a student, or a fake. The paper describes a technique called *sparse coding*, in which “analysts break down works of art into tiny patches and represent them as a series of mathematical functions. By comparing the functions produced with authentic artwork to those from possible imitators, they can produce an objective measure of whether the piece in question is real or fake.” The [cover story on Ars Technica](http://arstechnica.com/science/news/2010/01/sparse-coding-technique-applied-to-art-authentication.ars) explains:
> Sparse coding was originally developed for studying how neurons in the brain responded to visuals. It works by breaking down an image—for simplicity’s sake, usually one in grayscale—into mathematical functions, pixel by pixel. The images that are broken down are just small patches of whole works, not much more than a dozen pixels square.
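The intuition in that quote can be sketched in a few lines of Python. Everything below is a stand-in of my own devising, not the paper’s actual method: the “dictionary” of basis functions is random (in the real technique it is learned from patches of authentic work), and the greedy coder is a bare-bones matching pursuit. The point is just to show how an authentic patch, built from an artist’s own basis functions, reconstructs with less error than an imitation drawn from a different distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary of basis functions. In the paper this would be
# learned from small grayscale patches of authentic paintings; here it is
# just random unit-norm columns so the sketch stands alone.
patch_dim = 64          # e.g. a flattened 8x8 grayscale patch
n_atoms = 128           # number of basis functions in the dictionary
D = rng.standard_normal((patch_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)

def sparse_code(x, D, k=5):
    """Greedy matching pursuit: approximate x with k dictionary atoms.

    Returns the coefficient vector and the norm of the final residual;
    the residual is the 'objective measure' of how well the dictionary
    explains the patch."""
    residual = x.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        scores = D.T @ residual            # correlation with each atom
        j = np.argmax(np.abs(scores))      # best-matching atom
        coef[j] += scores[j]
        residual -= scores[j] * D[:, j]    # peel that atom off
    return coef, np.linalg.norm(residual)

# An "authentic" patch: a sparse combination of the artist's own atoms.
authentic = D[:, rng.choice(n_atoms, 5, replace=False)] @ rng.standard_normal(5)
# An "imitation" patch: drawn from an unrelated distribution.
imitation = rng.standard_normal(patch_dim)

_, err_auth = sparse_code(authentic, D)
_, err_fake = sparse_code(imitation, D)
print(f"authentic residual: {err_auth:.3f}")
print(f"imitation residual: {err_fake:.3f}")
```

The authentic patch should leave a much smaller residual, since it lives in the span of a few dictionary atoms; that gap between reconstruction errors is, in miniature, the kind of objective measure the researchers use.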
A recent story in the *New York Times* reveals what all long-time observers of the humanities know already: in the era of careerism, the humanities are a “hard sell.” (The quotation marks are there to emphasize that the irony of using that phrase is quite purposeful.) [Kate Zernike’s story](http://www.nytimes.com/2010/01/03/education/edlife/03careerism-t.html) profiles a number of universities, one of which is my very own. (The shuttering of the philosophy department is mentioned early in the piece, but there is no further commentary nor mention of UL Lafayette.)
As we begin this new year in 2010 with no new works coming into the public domain, it’s important to think about how exactly two things we like to create and accumulate, knowledge and wealth, get created. Tim O’Reilly, founder of O’Reilly Media, has a [great post](http://radar.oreilly.com/2009/11/the-war-for-the-web.html) that lays out some of the dimensions in terms of commerce, but much of what he notes applies equally well to knowledge and should be something humanists think about.
A recent trip through old podcasts brought me back to this great interview by [David Battino](http://www.oreillynet.com/pub/au/2032) with Peter Drescher, a sound designer who has created some remarkable music that all of us have heard: he’s the guy who makes the default ringtones for various mobile phone manufacturers.
That sounds immediately boring and mechanical and, well, corporate, but he takes his job seriously and all those labels that we are so quick to apply are things he himself knows. His Sisyphean task results in some interesting observations about what makes sound interesting to us, especially musical sounds. One of the things he reminds us is that the kind of ready repetition of music with which we are all now not only familiar but sometimes dependent — that is, recorded music — is really [a rather recent phenomenon](http://broadcast.oreilly.com/2009/12/the-myth-of-music-ownership.html). (The link is to a piece by Peter Drescher entitled “The Myth of Music Ownership.”)
Even within recorded music, however, the human mind between the ears seeks variation. Check it out. It’s short and full of great examples: [Peter Drescher on Annoying Audio](http://downloads.oreilly.com/digitalmedia/2007/03/30/dmi10-annoying-audio.mp3) — link is to MP3. (I had an embedded QT player, but I couldn’t get it not to pre-load the audio.)
[The last issue of InfoBits](http://its.unc.edu/TeachingAndLearning/publications/tlinfobits/CCM3_008445) was published this month. While I was never a heavy user of the service/bibliography, it was always nice to know it was there, to have it there. Perhaps this marks the end of one era of computing/IT in the humanities or perhaps it simply reveals how much such things are functions of particular individuals — to whom we later recognize we owe a debt — or perhaps it reveals only a particular moment in the funding of higher education in the U.S. No telling which way to read these tea leaves.
The Modern Language Association, the dominant professional organization among professors of language and literature, has tried over the past decade to confront the emergence of digital forms of communication from within the ranks of its members. The problem has been, of course, that the people most interested in doing it are usually at the bottom of the power (barrel? pyramid? ladder?) and those at the top often have a hard time grasping *why* someone would prefer something in the ephemeral ether let alone *how* they might go about doing it and *what* it does for scholarship. They have a working group, and they have some working policy documents up. [Now they have a wiki](http://wiki.mla.org/index.php/Evaluation_Wiki).
Over at HiveLogic, Dan Benjamin has a great [guide on Podcasting Equipment][hl], which has been updated for 2009. He distinguishes between four types of users: beginner, entry, mid-range, and prosumer. (Okay, that isn’t a very coherent typology, but the scenarios he provides for each are clear enough to be helpful to anyone curious.)
***Note for readers**: this post is currently in process while I am in Boise for the AFS meeting.*
For those who attended the forum at the annual meeting of American Folklore Society in Boise this year, here are a few posts that form the background to my current thinking:
* [In the Era of the Meta-Platform, Content Is King][era] (20 April 2008) is sort of a foundational statement, for me, of what I think the possibilities are more broadly for the humanities.
* In [The Road to Digital Considered][road] (14 August 2008) I discuss …
* In [The Cult of the Author in the New Economy][cult] (8 August 2009) …
* In [The Difference Digital Makes][diff] (29 July 2009) …
* In [One Digital Difference][one] (16 July 2009) …
* In [The Future of Scholarly Publishing from an Individual Perspective][future] (29 January 2009) …
I wasn’t at this year’s Museums and the Web conference, but I was checking it out while I was considering applying for the 2010 meeting in Denver. (I did apply, with the hope of getting some feedback on the Virtual Vermilionville idea.) The Indianapolis Museum of Art was the host institution this year, and so its director, Maxwell Anderson, gave the opening keynote speech. Anyone who’s been to such keynotes knows they can be fairly divergent in quality, but Anderson’s talk, especially his thoughts on how the on-line realm can open the museum out to visitors and, in effect, invite them in, was a really good one, and one I hope to pursue in this work with Vermilionville.
[Paul Graham][pg] is proof positive that the best writers are usually among the best thinkers. (We have done ourselves a terrible disservice by separating the two, but that is for another time.) Not only is Graham one of the best essayists at work today, he is also someone who knows how to find solutions to problems. Witness his most recent challenge:
> RFS 1: The Future of Journalism
> Newspapers and magazines are in trouble. We think they will mostly die, because we think we know what will replace them, and it is too far from their current model for them to reach it in time.
> And yet people still need at least some of what they do. You can’t have aggregators without content. So what will the content site of the future look like? And how will you make money from it? These questions turn out to be very closely related. Just as they were for print media, initially. The reason newspapers and magazines are dying is that what they do is no longer related to how they make money from it. In fact, most journalists probably don’t even realize that the definition of journalism they take for granted was not something that sprang fully-formed from the head of Zeus, but is rather a direct though somewhat atrophied consequence of a very successful 20th century business model.
> What would a content site look like if you started from how to make money—as print media once did—instead of taking a particular form of journalism as a given and treating how to make money from it as an afterthought?
> (The good news is, we think the writing will actually end up being better.)
> Groups applying to work on this idea should include at least one person who can write well and rapidly about any topic, one or more programmers who are good at statistics, data mining, and making sites scale, and someone who’s reasonably competent at graphic design. These functions can of course be combined, and in fact it’s even better if they are. Ex-Googlers would be particularly well suited to this project.