I am looking to build a better, more readily understood argument for why `qualitative + quantitative` is better than qualitative alone. Too many humanists are too quick to dismiss computational/digital/algorithmic approaches to humanistic topics as simply wrong.
On the other side of the equation are the information/computer scientists, enthusiasts, and pundits who imagine that quantitative analyses will eventually subsume everything. [Zeynep Tufekci has a terrific reflection][zt] on how [538 got the outcome of the Germany-Brazil World Cup match so wrong] (that is, they claimed the stats favored Brazil). In it she notes that:
> Instead of the aggressive pundit-versus-data stance taken by some big data proponents, it’s important to recognize that substantive area experts are often pretty good at recognizing measurement errors. … If the substantive experts are deemed unreliable, another option is “qualitative pull-outs” of your data to check for measurement error. Watch a game with, say, three experts, and count the uncalled fouls and specious, undeserved, penalty shots as judged by the experts. This can even be quantified as an index of measurement error based on qualitative examination (which will have its own measurement error because it’s turtles all the way down, folks—but intercoder reliability, technical way of saying “how much we all agree” can give a sense of scale of error.)
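Tufekci's "index of measurement error based on qualitative examination" is usually operationalized with an intercoder-reliability statistic such as Fleiss' kappa, which measures how much a fixed panel of raters agree beyond chance. Here is a minimal sketch of her three-experts-watching-a-game scenario; the `calls` data, labels, and the `fleiss_kappa` helper are all hypothetical illustrations, not anything from her piece:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa: chance-corrected agreement for a list of items,
    each labeled by the same number of raters with categorical labels.
    (Undefined when expected agreement is 1, i.e. one label everywhere.)"""
    categories = {label for item in ratings for label in item}
    n = len(ratings)      # number of items judged
    r = len(ratings[0])   # raters per item
    label_totals = Counter()
    P_bar = 0.0           # mean observed per-item agreement
    for item in ratings:
        counts = Counter(item)
        label_totals.update(counts)
        P_bar += (sum(c * c for c in counts.values()) - r) / (r * (r - 1))
    P_bar /= n
    # Expected agreement by chance, from overall label proportions.
    P_e = sum((label_totals[c] / (n * r)) ** 2 for c in categories)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical data: three experts judging five contested plays.
calls = [
    ["foul", "foul", "foul"],
    ["foul", "foul", "no-call"],
    ["no-call", "no-call", "no-call"],
    ["foul", "no-call", "no-call"],
    ["foul", "foul", "foul"],
]
print(round(fleiss_kappa(calls), 3))  # → 0.444
```

A kappa near 1 means the experts largely agree (so their "qualitative pull-out" is a trustworthy check on the quantitative model), while a low kappa is exactly her "turtles all the way down" caveat: the qualitative check carries measurement error of its own.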