ModVers – MLab in the Humanities, University of Victoria

“Peer Review Personas” Published in JEP (26 September 2014)

I am happy to announce that you’ll find “Peer Review Personas” in the new issue (17.3) of the open access Journal of Electronic Publishing (JEP). This article shares research that Jentery and I conducted throughout the 2013-14 academic year for Implementing New Knowledge Environments (INKE) and the Modernist Versions Project (MVP), across both the Maker Lab and the Electronic Textual Cultures Lab at UVic.

Below’s the abstract for the article, which you can read in its entirety here. It is part of a special JEP issue on “Metrics for Measuring Publishing Value: Alternative and Otherwise.” We would like to thank INKE and the MVP for their support while conducting this research and writing the essay. Thanks, too, to Maria Bonn and Jonathan McGlone at JEP for their feedback and support during the revision process.

Arguing for the relevance of speculative prototyping to the development of any technology, this essay presents a “Peer Review Personas” prototype intended primarily for authoring and publication platforms. It walks audiences through various aspects of the prototype while also conjecturing about its cultural and social implications. Rather than situating digital scholarly communication and digital technologies in opposition to legacy procedures for review and publication, the prototype attempts to meaningfully integrate those procedures into networked environments, affording practitioners a range of choices and applications. The essay concludes with a series of considerations for further developing the prototype.

DOI: https://dx.doi.org/10.3998/3336451.0017.304


Post by Nina Belojevic, attached to the ModVers project, with the news tag. Image for this post care of the Journal of Electronic Publishing.

Making Models of Modernism (12 May 2014)

This semester, with the Modernist Versions Project and the Maker Lab, Belaid Moa (Compute Canada) and I have been topic modelling modernist texts. Through this work, we hope to identify patterns, both thematic and stylistic, that have not yet been noticed across a (for now, admittedly small) corpus of modernist texts.

Topic modelling assumes that authors create documents using collocated clusters of words. Working “backward,” computer algorithms sort the words from a set of pre-processed documents and generate lists of the words that constitute these clusters. In our work, we are using LDA (Latent Dirichlet Allocation), a probabilistic model. This popular model relies on Bayesian inference, which works backward from an observed set of data to calculate the probability that certain conditions were in place to produce that data. In other words, it depends on a notion of causality and asks what circumstances need to hold for certain results to occur.

We use the MALLET package (an open source toolkit developed primarily by Andrew McCallum at the University of Massachusetts Amherst), which provides Gibbs sampling, hyperparameter optimization, and tools for inferring topics from trained models. These affordances let the researcher alter the distribution of topics across documents and the distribution of words across topics. That is, we can adjust our model to achieve more interesting results. We are interested in a model that is, as Julia Flanders describes it, a “strategic representation,” which might “distort the scale so that we can work with the parts that matter to us” (“The Productive Unease of 21st-century Digital Scholarship”).
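For readers who want a concrete sense of what this step looks like in code, here is a minimal sketch in Python. It is an illustration only: our actual workflow runs through MALLET rather than gensim, and the corpus folder, topic count, and other parameters below are hypothetical placeholders.

```python
# Illustrative only: we use MALLET, not gensim, for the work described
# above. This sketch shows the same pipeline (bag-of-words corpus -> LDA
# -> ranked topic words); the folder name and parameters are hypothetical.
import re
from pathlib import Path
from gensim import corpora, models

def tokenize(text):
    """Lowercase the text and keep alphabetic tokens only."""
    return re.findall(r"[a-z]+", text.lower())

# Each novel is one "document," read from a folder of plain-text files.
docs = [tokenize(p.read_text(encoding="utf-8")) for p in Path("corpus").glob("*.txt")]

dictionary = corpora.Dictionary(docs)               # the corpus vocabulary
bows = [dictionary.doc2bow(d) for d in docs]        # bag-of-words vectors

lda = models.LdaModel(bows, id2word=dictionary, num_topics=15,
                      passes=20, random_state=1)

# Print the nine most heavily weighted words in each topic, as in the
# topic lists later in this post.
for topic_id, words in lda.show_topics(num_topics=15, num_words=9, formatted=False):
    print(topic_id, ", ".join(w for w, _ in words))
```

In MALLET itself, the analogous steps are the import-dir and train-topics commands, with topic optimization handled by the optimize-interval option.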

For the purposes of our very preliminary study, we are examining word trends across a corpus but also, at least to some extent, narrative tendencies. To that end, we employed MALLET’s stop words list, which allows the algorithm to ignore common “function words” (i.e., adverbs, conjunctions, pronouns, and prepositions). The idea is to eliminate words that carry little thematic weight. Following a method outlined by Matthew Jockers and advocated by Belaid Moa, we also removed character names where possible. While it would likely be interesting to look at the ways MALLET reads texts without any intervention, for our purposes character names made it harder to express tendencies across the novels. However, we did not employ Jockers’s method in its entirety. In some cases, he uses a noun-based approach, eliminating all parts of speech except nouns. But we felt that, at least for now, including verbs and adjectives was important for revealing aspects of narrative. Jockers also advocates chunking texts, but we were interested in the ways the algorithm would read entire novels as documents.
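As a rough sketch of this pre-processing step (not our production script), the following Python fragment strips stop words and a project-specific list of character names before the texts reach the topic modeller; the name list and sample sentence are hypothetical.

```python
# A hedged sketch of the pre-processing described above, not our actual
# script: remove standard stop words plus a hand-made list of character
# names. Both lists here are small, hypothetical examples.
import re

STOPWORDS = set("""a an and are as at be but by for had have he her his if in
into is it no not of on or she such that the their then there these they this
to was will with would""".split())

CHARACTER_NAMES = {"dalloway", "clarissa", "bloom", "dedalus"}  # hypothetical

def preprocess(text):
    """Lowercase, tokenize, and drop stop words and character names."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS and t not in CHARACTER_NAMES]

print(preprocess("Mrs. Dalloway said she would buy the flowers herself."))
# -> ['mrs', 'said', 'buy', 'flowers', 'herself']
```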

While our repository of modernist texts continues to grow, we limited this preliminary study to a corpus of thirty-two early twentieth-century texts, formatted as TXT files, in order to profile the most prominent topics identified by LDA. The algorithm looks both for topics that correlate all the texts and for topics that distinguish individual texts from one another. The top three topics that are evenly distributed throughout the corpus—here showing the first nine words of each—are:

time, felt, day, looked, knew, work, face, hand, night
eyes, face, life, time, white, dark, round, hand, head
men, people, began, room, house, talk, suddenly, end, years

When reading these topics, we might want to consider that, according to the algorithm, these words are not only more frequent within the corpus but also have a greater chance of appearing near one another. As well, the top three or four words are weighted considerably more heavily than later words. Unsurprisingly, time seems to play a significant role in all the categories. Thus we might ask what each category tells us about time and temporality. In the first topic, the verbs are all in the past tense. Notably, the second topic arguably contains no verbs, with “face,” “hand,” “eyes,” and “head” as possible exceptions. The fragmented body parts also reveal an interesting slippage between time embodied in humans and time embodied in objects. For instance, as Stephen Ross notes, the results do not distinguish between the face or hand of a clock and the face or hand of a human character. The third category suggests that critics might want to consider how the durations of events (especially their beginnings and endings) are situated, and how spatiotemporal concepts shift across texts. For instance, when and where do people become men, or houses and rooms become years?

MALLET also shows us the relative weight of these word collocations across the novels.

Temporality Past

Temporality Embodied

The Temporality of Place

The third category, labelled “The Temporality of Place,” appears more prominently in Howards End, Mrs. Dalloway, A Passage to India, The Great Gatsby, and Heart of Darkness. We might ask how these texts in particular focus on the temporality of physical environments. On the other hand, I wonder why The Waves, Ulysses, and Women in Love do not seem to engage as fully with the first category, labelled “Temporality Past.” We might also ask why the novels of D. H. Lawrence seem to best exemplify the second category, labelled “Temporality Embodied.” Through LDA, do we get any sense of overlap between the ways people and objects embody time?

Building on MALLET’s algorithms, Belaid Moa has also written scripts that allow us to cluster texts according to perceived similarities and differences. Many readers will notice that Howards End and Mrs. Dalloway are similar when it comes to “The Temporality of Place,” but that topic 12 (people street feel leaves trees window room green door) is considerably more prevalent in Mrs. Dalloway than in Howards End. Moa’s scripts project all these differences and similarities and allow us to see the texts clustered according to MALLET’s assigned topics.
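Moa’s scripts are custom, but as a hedged analogue, the sketch below clusters novels by their topic proportions with scikit-learn’s AffinityPropagation, which likewise reports an exemplar (the most central member) for each cluster. The titles and topic weights are made-up placeholders, not our actual MALLET output.

```python
# An analogue of the clustering step, not Moa's actual scripts: group
# novels by their topic distributions and report each cluster's exemplar.
# The topic-weight matrix below is a hypothetical placeholder.
import numpy as np
from sklearn.cluster import AffinityPropagation

titles = ["Mrs. Dalloway", "Howards End", "Ulysses", "Tender is the Night"]
topic_weights = np.array([          # rows: novels; columns: topic proportions
    [0.40, 0.35, 0.25],
    [0.38, 0.32, 0.30],
    [0.10, 0.60, 0.30],
    [0.15, 0.20, 0.65],
])

ap = AffinityPropagation(random_state=0).fit(topic_weights)
for label in sorted(set(ap.labels_)):
    exemplar = titles[ap.cluster_centers_indices_[label]]
    members = [t for t, l in zip(titles, ap.labels_) if l == label]
    print(f"Cluster {label} (exemplar: {exemplar}): {members}")
```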

The Multiple Dimensions of Topics in Modernism

Given our current data set, these are the clusters we identified with LDA, with the exemplar being the text most central to that particular cluster:

Cluster 1, exemplar Tender is the Night:
The Awakening (Chopin), Heart of Darkness (Conrad), Lord Jim (Conrad), The Secret Agent (Conrad), The Great Gatsby (Fitzgerald), Tender is the Night (Fitzgerald), The Trial (Kafka), Babbitt (S. Lewis), Tarr (W. Lewis), 1984 (Orwell), Burmese Days (Orwell), The Autobiography of Alice B. Toklas (Stein), Twilight Sleep (Wharton)

Cluster 2, exemplar Ulysses:
Nightwood (Barnes), A Passage to India (Forster), Tess of the D’Urbervilles (Hardy), Dubliners (Joyce), A Portrait of the Artist as a Young Man (Joyce), Ulysses (Joyce)

Cluster 3, exemplar Seven Pillars of Wisdom:
Seven Pillars of Wisdom (Lawrence)

Cluster 4, exemplar Time Regained:
The Ambassadors (James), The Captive (Proust), Time Regained (Proust)

Cluster 5, exemplar Mrs. Dalloway:
The Good Soldier (Ford), Howards End (Forster), Sons and Lovers (Lawrence), Women in Love (Lawrence), Of Human Bondage (Maugham), Three Lives (Stein), The Picture of Dorian Gray (Wilde), Mrs. Dalloway (Woolf), The Waves (Woolf)

What subcategories of early twentieth-century modernism do these clusters suggest? How might we define these clusters for modernist literary criticism? Do they actually suggest anything, including temporal, geographic, or stylistic tendencies? How might these clusters compare with models constructed for, say, Victorian novels? These are questions we are still experimenting with, and we look forward to exploring them further as we continue this work.


Post by Jana Millar-Usiskin in the ModVers category with the versioning tag. Images for this post care of Jana Millar-Usiskin.

Counting Virginia Woolf (10 April 2014)

In my last post, “Making Modernism Big,” I ended by asking how a computer might read modernism. During the last few months, this question has informed the work I’ve been doing with the computer scientist Belaid Moa. In preliminary attempts to articulate an answer, Belaid and I have been exploring what is possible with Python, a flexible, extensible, and high-level programming language that allows us to give instructions to the computer, essentially teaching it how to read.

Using the texts of Virginia Woolf, Belaid and I are focusing on a rather basic computational practice: counting. The computer excels at counting what it reads. Our computer—with the help of Python, Beautiful Soup (an HTML parser), and a few regular expressions—can now count the highest-frequency words in every novel from The Voyage Out to Between the Acts, or the number of questions in each (805 in The Voyage Out, 473 in Between the Acts). It can find the frequencies of the first words in each sentence of The Waves, or of the last words. It can find the frequencies of words per HTML-encoded paragraph in Mrs. Dalloway (she, 6; she, 3; the, 10; a, 2; said, 3; etc.). The computer is eager to quantify, and this is great if we find value in knowing the numbers. But to what extent are the numbers important to human readers? Counting the word “war” in Virginia Woolf will not give us much insight into, say, the ways war and gender intersect in Mrs. Dalloway. For humans, at least, counting plays a small part in the usual sitting-down-with-a-book reading experience. While we might unconsciously register a repeated word or phrase, it is highly unlikely that anyone will count repetitions as they go.
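For anyone curious about the mechanics, here is a minimal, hedged version of this kind of counting in Python; the file name is a hypothetical placeholder, and our actual scripts also use Beautiful Soup to pull paragraphs out of HTML-encoded editions.

```python
# A minimal sketch of the counting described above (not our full script).
# The file name is hypothetical; HTML-encoded editions would first be
# passed through Beautiful Soup to extract paragraph text.
import re
from collections import Counter

text = open("the_voyage_out.txt", encoding="utf-8").read()

words = re.findall(r"[a-zA-Z']+", text.lower())
print(Counter(words).most_common(10))      # highest-frequency words

print(text.count("?"))                     # a rough count of questions

# Frequencies of the first word of each sentence (a rough approximation).
first_words = re.findall(r"(?:^|[.!?]\s+)([A-Za-z']+)", text)
print(Counter(w.lower() for w in first_words).most_common(10))
```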

Still, there is something eerily fascinating about the high-frequency, small words that now captivate our learning computer. These are the words that will likely be most common in any text written in English. It would probably be hard to distinguish between modernist texts, or indeed any group of texts, based on these kinds of results. When thinking about machine learning, however, these filler words are important because they are usually the easiest to predict. Consider this beginning: “Mrs. Dalloway said she would buy…” Even if you (the human reader) weren’t familiar with the first line, there is a much higher chance you could predict the word that immediately follows (“the”) than the one that comes after that (“flowers”). Predictability is a key part of reading. Knowing what to predict, a good reader can focus on the parts that are surprising and unpredictable. These small words might signal a kind of architectural structure around which the distinguishing features of literary edifices are often built. Thus, it might be productive to explore not only the ways modernist writers break this edifice apart but also how they reinforce familiar or predictable forms of language.
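To make the predictability point concrete, the toy sketch below builds a table of next-word frequencies from a single novel; with it, “the” tends to dominate the continuations of a word like “buy,” while content words such as “flowers” are far harder to guess. The file name is a hypothetical placeholder.

```python
# A toy model of next-word predictability, illustrative only: count, for
# each word in a novel, which words follow it and how often. The file
# name is a hypothetical placeholder.
import re
from collections import Counter, defaultdict

tokens = re.findall(r"[a-z']+",
                    open("mrs_dalloway.txt", encoding="utf-8").read().lower())

next_word = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    next_word[prev][nxt] += 1

def most_likely_after(word, n=3):
    """The n most frequent continuations of `word` in this text."""
    return next_word[word].most_common(n)

print(most_likely_after("buy"))      # small "filler" words usually dominate
print(most_likely_after("would"))
```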

Question Frequencies in Woolf Novels

Going forward, we plan to use Python and other programming languages to further explore (un)predictability, with hopes of teaching our computer to recognize the giddy, exhilarating “plunge” of modernist language.


Post by Jana Millar-Usiskin in the ModVers category, with the versioning tag. Images for this post care of Jana Millar-Usiskin.

Hands-On Textuality (19 November 2013)

In addition to my z-axis research in the Maker Lab this year, I am working on a small-scale project for a scholarly indie game, developed in conjunction with the Modernist Versions Project and Implementing New Knowledge Environments. Soon we’ll have more details to share, but—while we have been researching and prototyping the game—I’ve been working through the connections between scholarly editing and videogame design.

Last spring, I conducted research on the editorial history of Marcel Proust’s unfinished nineteenth-century novel, Jean Santeuil. Encoding the differences between the first two published editions of the novel, and using a tool called modVers to express the differences between those two editorial efforts, I suggested that working through these editorial processes engages Proust’s modernist conceptions of temporal and individual development. As I described in my previous post on Jean Santeuil, versioning the unfinished novel did not simply allow me to read Proust’s modernist technique; it also allowed me to actively work through the genesis of that technique. This hands-on, procedural experience of encoding the text forced me to unpack Proust’s fragmented construction of narrative chronology.

The fragment I encoded describes Jean’s trip to Penmarch on a stormy day. In the original draft pages of the book, Proust wrote two contradictory versions of the trip that were left unrevised. In one version of the passage, Jean travels by car to Penmarch; in another, he encounters two women and a biker on a train (the biker returns in a later section of the book). The first published edition of the novel, edited by André Maurois and Bernard de Fallois, argues that Proust intended to join these scenes through revision, excising a paragraph and reordering the two passages so that Jean’s car journey becomes continuous with his travel by train. The scholarly edition of the novel, edited by Pierre Clarac and André Ferré, maintains the separation between these two accounts, preserving a fractured, contradictory, and unfinished narrative.

Encoding the difference between these two editorial interpretations of the passage means working through Proust’s evolving construction of temporality. In order to structure the rearrangement of narrative events using TEI, I had to navigate between disparate sections of the document, copying and pasting variant chunks of text that appear in different sections of each version. As I navigated the spatial arrangement of the XML file to express the changing temporal arrangement of the narrative episode, the manual labor required to structure the document fashioned an editorial experience of Proust’s modernist technique. I used location IDs to link fragments of text that appear in different sections of the document, constantly moving back and forth between different spatial and temporal permutations of the same episode. This is how Proust composed À la recherche du temps perdu: in separate fragments that he connected systematically through the process of composition. His modernist technique, known for creating multiple scenes that recall and echo each other (embodied in his concept of involuntary memory), was fashioned by refining the process that remains unfinished in Jean Santeuil, interlinking temporally fragmented episodes into a harmonious and resonant narrative.

Modvers + Proust

Purely visual representations of the Penmarch fragment fail to capture this editorial experience. Working through Proust’s modernist temporalities requires an interactive and hands-on experience (an editorial experience) rather than a purely textual and visual representation (a readerly experience). That is, communicating my argument about the genesis of Proust’s modernist technique using digital methods calls for a dynamic and operable interface. Frameworks for this approach exist in the field of videogame design, and indie game development platforms offer tools for developing such an interactive scholarly experience. In September, merritt kopas led a workshop on videogame design as part of the Building Public Humanities project. For the workshop, she demonstrated game design fundamentals using Twine and her work with Construct 2. How could such design tools and principles be implemented in a scholarly context? In the instance of Proust, I’m considering how Twine could be used to produce a dynamic and interactive experience of the text that asks users to work through its spatial and temporal arrangement. Such an approach requires combining representation and design, reading and doing (if those acts can be neatly parsed). The product would not be an electronic text that moves the printed text onscreen, but rather an operable game that communicates the critical functions of scholarly editing. With this goal in mind, I’d like to unpack the connections between postmodern theories of textual editing and procedural rhetoric—connections that reveal a shared set of concerns, investments, and approaches across both textual criticism and game design.

Postmodern theories of textual scholarship examine the multiplicities of the work created by textual difference, emphasizing scholarly editing as an interpretive act that produces one among many possible views of the work. In his analysis of W. W. Greg, Sukanta Chaudhuri suggests: “Editorial action does not reduce or neutralize the unstable, expansive tendency of the text, but draws it into its own operation. Objective text-based criteria cannot finally yield an objective output. . . . Greg is implicitly proposing two principles: first, of editorial divergence as inherently derived from the textual material; and second, of reception as guiding the editorial function” (106). Reading Greg as an early postmodernist editor, Chaudhuri emphasizes the active work of editorial operations or functions upon the text that construct an interpretive, persuasive view of the work through the hands-on act of editing. Jerome McGann picks up the same language of function and operation in editorial practice in Radiant Textuality, where he writes:

In what I would call a quantum approach, however, because all interpretive positions are located at “an inner standing point,” each act of interpretation is not simply a view of the system but a function of its operations. . . . Its most important function is not to define a meaning or state of the system as such—although this is a necessary function of any interpretation—but to create conditions for further dynamic change within the system. Understanding the system means operating within and in the system. . . . “The Ivanhoe Game” was invented to expose and promote this view of imaginative works. (218-219)

Here, McGann proposes a more radical iteration of Greg’s early editorial operations, suggesting that, by viewing the work as a dynamic environment capable of multiple states, interpretations, or editorial “views,” the editor can create an operable, interactive system in which users can explore the multiple permutations of the original work. If Greg advocates “reception as guiding the editorial function,” then McGann proposes a model for deploying that function in electronic environments, where the dynamic, operable nature of electronic environments can communicate the dynamic operations of interpretive scholarly editing. Of course, leveraging the affordances of electronic environments to transform the practice of scholarly editing is not a new concept. The Ivanhoe Project, developed by Jerome McGann and Johanna Drucker, is one existing instance of a scholarly editing game. Elsewhere, Neil Fraistat and Steven Jones have explored textual operations in electronic environments through their concepts of “Immersive Textuality” or “architexturality.” John Bryant’s fluid text environment and D. F. McKenzie’s sociological approach also examine textual fluidity and multiplicity, while exploring the affordances of representing textual change in digital environments. Whereas these approaches focus on new spaces for scholarly editing online, I would rather look at the operation of textual interpretation itself, considering how the algorithmic and procedural operations of videogames offer not only new environments but also new interpretive mechanics that allow the operations of scholarly editing to function in new ways and engage with new audiences.

The concepts of functions and operations are key elements of videogame design, which uses game mechanics to structure interactions within the game world. Just as postmodernist conceptions of editing see the editorial function as an act that engages and transforms the work through material interactions, videogames offer hands-on operations through which the player instigates material change. As Alexander Galloway explains:

What used to be primarily the domain of eyes and looking is now more likely that of muscles and doing, thumbs, to be sure, and what used to be the act of reading is now the act of doing, or just “the act.” In other words, while the mass media of film, literature, television, and so on continue to engage in various debates around representation, textuality, and subjectivity, there has emerged in recent years a whole new medium, computers and in particular video games, whose foundation is not in looking and reading but in the instigation of material change through action. (4)

Galloway’s distinction between looking and doing, or between reading and acting, is complicated by theories of scholarly editing, which reveal textual operations as acts of interpretation and engagement that prompt material change in texts. Still, through his characterization of videogame interactions as dynamic operations that effect material change (as opposed to static acts of reading and looking), Galloway reveals deep ties between the operations of videogames and editing.

Whereas existing connections between scholarly editing and videogames emphasize a mutual investment in interpretation, performance, and multiplicity, I would instead like to consider the acts themselves through which this multiplicity is realized. I want to suggest that if scholarly editing is premised upon material acts through which the editor crafts specific experiences or interpretations of the work, then videogame design offers new methods for expanding and communicating editorial functions. In other words, the dynamic operations of videogames can communicate and expand the operations of textual scholarship. Ian Bogost provides a method for using videogame design to craft editorial arguments through his concept of procedural rhetoric. As he explains:

Procedurality refers to a way of creating, explaining, or understanding processes. And processes define the way things work: the methods, techniques, and logics that drive the operation of systems. . . . Rhetoric refers to effective and persuasive expression. Procedural rhetoric, then, is a practice of using processes persuasively. . . . Procedural rhetoric is a technique for making arguments with computational systems and for unpacking computational arguments others have created. (2-3)

If scholarly editing reveals the methods, techniques, and processes of production that shape a given text (in order to produce a scholarly argument about the work), then it shares deep affinities with procedural rhetoric, which uses systematic ways of working through processes in order to craft interactive, operable arguments. As a design principle deployed to structure persuasive interactions with dynamic media, procedural rhetoric thus resonates (if only in part) with the methods and concerns of scholarly editing. The persuasive operations of videogame design and the interpretive operations of textual editing offer a rich overlap through which scholars can craft hands-on critical experiences that communicate textual arguments. How can we enrich and extend arguments about modernist technique through the algorithmic logic of game design? I look forward to sharing our findings in future posts.

Works Cited

Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press, 2007. Print.

Chaudhuri, Sukanta. “W. W. Greg, Postmodernist.” Textual Cultures 4.2 (Autumn 2009): 102-110. Web. <https://www.jstor.org/stable/10.2979/TEX.2009.4.2.102>

Fraistat, Neil, and Steven E. Jones. “Immersive Textuality: The Editing of Virtual Spaces.” Text 15 (2003): 69-82. Web. <https://www.jstor.org/stable/30227785>

Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis: U of Minnesota Press, 2007. Print.

McGann, Jerome. Radiant Textuality: Literature after the World Wide Web. New York: Palgrave, 2004. Print.


Post by Alex Christie, attached to the ModVers category, with the versioning tag. Image for this post care of Alex Christie.

Making Modernism Big (16 October 2013)

This semester, with the Modernist Versions Project (MVP), I have been creating a repository of modernist texts for the purposes of text analysis and machine learning. The scope of this project requires a powerful infrastructure, including hardware, software, and technical support, provided in part by Compute Canada, a high-performance computing platform for universities and institutions across Canada. The plan is to first aggregate a significant number of modernist texts (in TXT format) and—once we have a working repository (in late 2013 / early 2014)—mobilize computer vision and machine learning techniques to infer as yet unseen patterns across modernism. Also, in collaboration with Adam Hammond (2012-13 MVP postdoc), we are exploring the possibilities of a Turing test for modernism. This test would follow our machine learning work and—if nothing else—be a playful experiment in the spirit of modernist artifice.

But building a repository from web-based materials is quite tricky. As part of his work for the Routledge Encyclopedia of Modernism, Stephen Ross has created an impressively thorough list of modernist authors that we will use to amass modernist texts housed across the web. As one might imagine, electronic texts are not always “clean,” and they don’t always have sufficient metadata. And even when repositories like Project Gutenberg Australia have relatively clean text files, their selection is limited due to copyright (among other reasons). As such, the version of modernism most people currently access through popular online repositories like Gutenberg Australia often doesn’t contain important works by notable women writers and people of colour. In Project Gutenberg Australia, there is no Nella Larsen, Zora Neale Hurston, or Langston Hughes. No Dorothy Richardson or Djuna Barnes. Formally, there is also no poetry, so we don’t get Ezra Pound or T. S. Eliot, either. Put differently, Gutenberg Australia’s version of modernism appears to be very different from the version most North American students will encounter in, say, a university course. That said, we are not relying on just one repository for this work, and we hope that scripts written in collaboration with Compute Canada will allow us to be comprehensive and equitable in our articulation of modernism, especially where difference is concerned. We also hope to fill in gaps where possible, either by adding our own texts to existing repositories or by conducting more research on modernist writers who are not (yet) discoverable on the web. In this regard, we are especially inspired by the digital humanities practitioners Amy Earhart and Susan Brown.

To get started on this MVP project, I’ve begun meeting with Jentery Sayers, Stephen Ross, and Belaid Moa, one of Compute Canada’s HPC specialists from WestGrid. Belaid has been extremely helpful, guiding me through the WestGrid system and showing me how to develop a Python script that will grab modernist texts from an array of online repositories. Our script needs to locate the required texts within the HTML tree structure of the repository sites, download them, and store them in the Compute Canada database. In order to develop this script, I am learning the syntax and semantics of arrays, functions, strings, and regular expressions. As we are told over and over, code is a language, and so I look forward to becoming better versed in the intricacies of Python throughout the year.
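The sketch below gives a hedged sense of what such a harvesting script can look like with requests and Beautiful Soup. The index URL, link pattern, and output folder are hypothetical stand-ins; the real script targets specific repositories and stores its results in the Compute Canada database.

```python
# A hedged sketch of a text-harvesting script, not the script described
# above: walk a repository's index page, follow links to plain-text
# files, and save them locally. The URL and paths are hypothetical.
import re
from pathlib import Path
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

INDEX_URL = "https://example.org/modernist-texts/"   # hypothetical repository
OUT_DIR = Path("corpus")
OUT_DIR.mkdir(exist_ok=True)

index = BeautifulSoup(requests.get(INDEX_URL).text, "html.parser")

# Locate links to .txt files within the page's HTML tree.
for link in index.find_all("a", href=re.compile(r"\.txt$")):
    url = urljoin(INDEX_URL, link["href"])
    path = OUT_DIR / url.rsplit("/", 1)[-1]
    path.write_text(requests.get(url).text, encoding="utf-8")
    print("saved", path)
```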

For now, the idea is to start small (i.e., with twenty novels) and see how well the analysis scales up when more modernist texts are included. I chose the twenty novels we are tentatively planning to use based on availability, university syllabi, and MVP familiarity with them. We plan to run basic machine learning methods on this sample of texts in order to determine commonalities, differences, and tendencies across them. To be honest, I wonder whether the computer will be as confused about modernism as I am. I wonder if we’ll agree about what makes a passage important or interesting, whether we’ll get tripped up in the same sentences, whether we will come to the same conclusions about a given text. In short, how will this computational approach challenge the ongoing assumptions of literary scholars? More from me soon.


Post by Jana Millar Usiskin attached to the ModVers category, with the versioning tag. Image for this post care of Jana Millar Usiskin and Google Images.

Visualizing Ariel across Audio and Print (13 October 2013)

Ariel is Plath’s finest collection of poetry, a potent and fierce publication that demonstrates the poet’s rhetorical prowess and her aptitude for manipulating several registers of language to create probing, dramatic personae. As a collection, Ariel is also a mutable entity. The collection, amassed from a manuscript entitled Ariel and Other Poems that Plath left behind in a black binder on her desk, exists in multiple forms; arrangements, additions, and deletions vary from edition to edition. There is the British edition published by Faber and Faber in 1965 and edited by Ted Hughes, which was followed by the American edition published by Harper & Row in 1966, edited once more by Hughes but including Robert Lowell’s infamous introduction to Plath (the front cover features Lowell’s name in the same size font as Plath’s own). The UK and US editions of Ariel contain different selections of the poems and diverge from the manuscript Plath left behind. In 2004, HarperCollins published Ariel: The Restored Edition, with a foreword by the poet’s daughter, Frieda Hughes. The restored edition presents the poems according to the arrangement and selection of Plath’s original manuscript.

Beyond this question of which poems constitute Ariel, the BBC recordings of Plath’s Ariel-era poems embody further permutations of the oeuvre. Plath read for the BBC in October 1962, when many of these poems were nascent. The audio recordings not only showcase the intensely sonic quality of Plath’s poetry and her commanding voice and deep New England accent, but also disclose alternate versions of the lauded poems. In some cases the differences are slight and subtle: an extra rhyme or two slips into the broken alliterative lines of “Lady Lazarus.” In other cases, the audio recordings communicate entire sections of deleted material, as in the case of “Amnesiac.”

These discrete differences between audio and print versions of the poems demand critical attention.

Last summer, a conversation about versioning with Modernist Versions Project (MVP) collaborators Tanya Clement, Martin Holmes, Susan Schreibman, and Jana Millar-Usiskin led to discussions about how audio figures within the enterprise of versioning. The case of the Plath recordings prompted the MVP to consider expanding the boundaries of versioning techniques and mechanisms to include audio files in addition to text files. Opening up versioning to include audio helps articulate the mutability of poetry and emphasizes its deeply auditory dimension.

Visualizations of the Ariel poems that incorporate the author’s readings would afford a richer, multi-faceted context in which to understand and approach Plath’s poetry. Providing scholars and students with audio recordings of poetry in online environments jostles the visual biases of the humanities, which are so often reproduced in the digital humanities. Poetry recordings also permit valuable insights into literary history. For example, the BBC recordings of Plath prompt considerations of the relationship between radio (both as a medium and as a cultural institution) and poetry, and of Plath’s status within national and literary communities. Further, making these recordings available for study could open literary criticism to further aspects of a poem, such as its status as a performance. Expanding digital tools to accommodate both textual and audio versions of a poem fosters more layers of interpretation and criticism with regard to sound, and it broadens traditional conceptualizations of poetry as solely print-based. Ideally, the MVP’s interface will articulate both the audio and textual lives of Ariel, visualizing the key discrepancies between print and audio at the level of content, but also offering scholars an environment in which to investigate the public life of a poem and the intensely performative qualities of Plath’s poetry.


Post by Adèle Barclay attached to the ModVers category, with the versioning tag. Image for this post care of Sylvia Plath Info.

MLab Returns from DH 2013 (22 July 2013)

At the University of Nebraska-Lincoln last week, the Maker Lab and the Modernist Versions Project (MVP) had various opportunities to share their research during Digital Humanities 2013, which was a wonderful event. Special thanks to the Center for Digital Research in the Humanities for being such a fantastic host. The conference was incredibly well organized, and—across all sessions—the talks were thoroughly engaging.

Below are abstracts for the long paper (“Made to Make: Expanding Digital Humanities through Desktop Fabrication”) and the workshop (“From 2D to 3D: An Introduction to Desktop Fabrication”) I conducted in collaboration with Jeremy Boggs and Devon Elliott. Both the long paper and the workshop engaged the Lab’s desktop fabrication research. Also included is an abstract for the MVP’s short paper on its ongoing versioning research. That talk was delivered by Susan Schreibman on behalf of the MVP and was authored by Daniel Carter, with feedback from Susan, Stephen Ross, and me. Finally, during the Pedagogy Lightning Talks at the Annual General Meeting of the Association for Computers and the Humanities, I gave a very brief introduction to one of the Lab’s most recent projects, “Teaching and Learning Multimodal Communications” (published by the IJLM and MIT Press).

Elsewhere, you can view the slide deck for the “Made to Make” talk, fork the source files for that slide deck, and review the notes from the desktop fabrication workshop, where we also compiled a short bibliography on desktop fabrication in the humanities. To give the papers and workshop some context, below I’ve also embedded some tweets from the conference. Thanks, too, to Nina Belojevic, Alex Christie, and Jon Johnson for their contributions to aspects of the “Made to Make” presentation.

Title slide for the “Made to Make” presentation

“Made to Make: Expanding Digital Humanities through Desktop Fabrication”

Jentery Sayers, Assistant Professor, English, University of Victoria
Jeremy Boggs, Design Architect, Digital Research and Scholarship, Scholars’ Lab, University of Virginia Library
Devon Elliott, PhD candidate, History, Western University
Contributing labs: Scholars’ Lab (University of Virginia), the Lab for Humanistic Fabrication (Western University), and the Maker Lab in the Humanities (University of Victoria)
Slide deck | Source files

This paper presents substantive, cross-institutional research conducted on the relevance of desktop fabrication to digital humanities research. The researchers argue that matter is a new medium for digital humanities, and—as such—the field’s practitioners need to develop the workflows, best practices, and infrastructure necessary to meaningfully engage digital/material convergence, especially as it concerns the creation, preservation, exhibition, and delivery of cultural heritage materials in 3D. Aside from sharing example workflows, best practices, and infrastructure strategies, the paper identifies several key growth areas for desktop fabrication in digital humanities contexts. Ultimately, it demonstrates how digital humanities is “made to make,” or already well positioned to contribute significantly to desktop fabrication research.

Desktop fabrication is the digitization of analog manufacturing techniques (Gershenfeld 2005). Comparable to desktop publishing, it affords the output of digital content (e.g., 3D models) in physical form (e.g., plastic). It also personalizes production through accessible software and hardware, with more flexibility and rapidity than its analog predecessors. Common applications include using desktop 3D printers, milling machines, and laser cutters to prototype, replicate, and refashion solid objects.

To date, desktop fabrication has been used by historians to build exhibits (Elliott, MacDougall, and Turkel 2012); by digital media theorists to fashion custom tools (Ratto and Ree 2012); by scholars of teaching and learning to re-imagine the classroom (Meadows and Owens 2012); by archivists to model and preserve museum collections (Terdiman 2012); by designers to make physical interfaces and mechanical sculptures (Igoe 2007); and by well-known authors to “design” fiction as well as write it (Bleecker 2009; Sterling 2009). Yet, even in fields such as digital humanities, very few non-STEM researchers know how desktop fabrication actually works, and research on it is especially lacking in humanities departments across North America.

By extension, humanities publications on the topic are rare. For instance, “desktop fabrication” never appears in the archives of Digital Humanities Quarterly. The term and its methods have their legacies elsewhere, in STEM laboratories, research, and publications, with Neil Gershenfeld’s Fab: The Coming Revolution on Your Desktop (2005) being one of the most referenced texts. Gershenfeld’s key claim is that “Personal fabrication will bring the programmability of digital worlds we’ve invented to the physical world we inhabit” (17). This attention to digital/material convergence has prompted scholars such as Matt Ratto and Robert Ree (2012) to argue for: 1) “physical network infrastructure” that supports “novel spaces for fabrication” and educated decisions in a digital economy, 2) “greater fluency with 3D digital content” to increase competencies in digital/material convergence, and 3) an established set of best practices, especially as open-source objects are circulated online and re-appropriated.

To be sure, digital humanities practitioners are well equipped to actively engage all three of these issues. The field is known as a field of makers. Its practitioners are invested in knowing by doing, and they have been intimately involved in the development of infrastructure, best practices, and digital competencies (Balsamo 2009; Elliott, MacDougall, and Turkel 2012). They have also engaged digital technologies and content directly, as physical objects with material particulars (Kirschenbaum 2008; McPherson 2009). The key question, then, is how to mobilize the histories and investments of digital humanities to significantly contribute to desktop fabrication research and its role in cultural heritage.

To spark such contributions, the researchers are asking the following questions: 1) What are the best procedures for digitizing rare or obscure 3D objects? 2) What steps should be taken to verify the integrity of 3D models? 3) How should the source code for 3D objects be licensed? 4) Where should that source code be stored? 5) How are people responsible for the 3D objects they share online? 6) How and when should derivatives of 3D models be made? 7) How are fabricated objects best integrated into interactive exhibits of cultural heritage materials? 8) How are fabricated objects best used for humanities research? 9) What roles should galleries, libraries, archives, and museums (GLAM) play in these processes?

In response to these questions, the three most significant findings of the research are as follows:

I) Workflow: Currently, there is no established workflow for fabrication research in digital humanities contexts, including those that focus on the creation, preservation, exhibition, and delivery of cultural heritage materials. Thus, the first and perhaps most obvious finding is that such a workflow needs to be articulated, tested in several contexts, and shared with the community. At this time, that workflow involves the following procedure: 1) Use a DSLR camera and a turntable to take at least twenty photographs of a stationary object. This process should be conducted in consultation with GLAM professionals, either on or off site. 2) Use software (e.g., 3D Catch) to stitch the images into a 3D scale model. 3) In consultation with GLAM professionals and domain experts, error-correct the model using appropriate software (e.g., Blender or Mudbox). What constitutes an “error” should be concretely defined and documented. 4) Output the model as an STL file. 5) Use printing software (e.g., ReplicatorG) to process STL into G-code. 6) Send G-code to a 3D printer for fabrication.

If the object is part of an interactive exhibit of cultural heritage materials, then: 7) Integrate the fabricated object into a circuit using appropriate sensors (e.g., touch and light), actuators (e.g., diodes and speakers), and shields (e.g., wifi and ethernet). 8) Write a sketch (e.g., in Processing) to execute intelligent behaviors through the circuit. 9) Test the build and document its behavior. 10) Refine the build for repeated interaction. 11) Use milling and laser-cutting techniques to enhance interaction through customized materials. If the object and/or materials for the exhibit are being published online, then: 12) Consult with GLAM professionals and domain experts to address intellectual property, storage, and attribution issues, including whether the object can be published in whole or in part. 13) License all files appropriately, state whether derivatives are permitted, and provide adequate metadata (e.g., using Dublin Core). 14) Publish the STL file, G-code, circuit, sketch, documentation, and/or build process via a popular repository (e.g., at Thingiverse) and/or a GLAM/university domain. When milling or laser-cutting machines are used as the primary manufacturing devices instead of 3D printers (see step 6 above), the workflow is remarkably similar.

II) Infrastructure: In order to receive feedback on the relevance of fabrication to the preservation, discoverability, distribution, and interpretation of cultural heritage materials, humanities practitioners should actively consult with GLAM professionals. For instance, the researchers are currently collaborating with libraries at the following institutions: the University of Virginia, the University of Toronto, York University, Western University, McMaster University, the University of Washington, and the University of Victoria.

By extension, desktop fabrication research extends John Unsworth’s (1999) premise of “the library as laboratory” into all GLAM institutions and suggests that new approaches to physical infrastructure may be necessary. Consequently, the second significant finding of this research is that makerspaces should play a more prominent role in digital humanities research, especially research involving the delivery of cultural heritage materials in 3D. Here, existing spaces that are peripheral or unrelated to digital humanities serve as persuasive models. These spaces include the Critical Making Lab at the University of Toronto and the Values in Design Lab at University of California, Irvine. Based on these examples, a makerspace for fabrication research in digital humanities would involve the following: 1) training in digital/material convergence, with an emphasis on praxis and tacit knowledge production, 2) a combination of digital and analog technologies, including milling, 3D-printing, scanning, and laser-cutting machines, 3) a flexible infrastructure, which would be open-source and sustainable, 4) an active partnership with a GLAM institution, and 5) research focusing on the role of desktop fabrication in the digital economy, with special attention to the best practices identified below.

III) Best practices: Desktop fabrication, especially in the humanities, currently lacks articulated best practices in the following areas: 1) attribution and licensing of cultural heritage materials in 3D, 2) sharing and modifying source code involving cultural heritage materials, 3) delivering and fabricating component parts of cultural heritage materials, 4) digitizing and error-correcting 3D models of cultural artifacts, and 5) developing and sustaining desktop fabrication infrastructure.

This finding suggests that, in the future, digital humanities practitioners have the opportunity to actively contribute to policy-making related to desktop fabrication, especially as collections of 3D materials (e.g., Europeana and Thingiverse) continue to grow alongside popular usage. Put differently: desktop fabrication is a disruptive technology. Governments, GLAM institutions, and universities have yet to determine its cultural implications. As such, this research is by necessity a matter of social importance and an opportunity for digital humanities to shape public knowledge.


Decimated scan of Brian Croxall, produced during the workshop

“From 2D to 3D: An Introduction to Desktop Fabrication”

Jeremy Boggs, Design Architect, Digital Research and Scholarship, Scholars’ Lab, University of Virginia Library
Devon Elliott, PhD candidate, History, Western University
Jentery Sayers, Assistant Professor, English, University of Victoria
Notes from the workshop

Desktop fabrication is the digitization of analog manufacturing techniques. Comparable to desktop publishing, it affords the output of digital content (e.g., 3D models) in physical form (e.g., plastic). It also personalizes production through accessible software and hardware, with more flexibility and rapidity than its analog predecessors. Additive manufacturing is a process whereby a 3D form is constructed by building successive layers of a melted source material (at the moment, this is most often some type of plastic). The technologies driving additive manufacturing in the desktop fabrication field are 3D printers, tabletop devices that materialize digital 3D models. In this workshop, we will introduce technologies used for desktop fabrication and additive manufacturing, and offer a possible workflow that bridges the digital and physical worlds for work with three-dimensional forms. We will begin by introducing 3D printers, demonstrating how they operate by printing things throughout the event. The software used in controlling the printer and in preparing models to print will be explained. We will use free software sources so those in attendance can experiment with the tools as they are introduced.

The main elements of the workshop are: 1) acquisition of digital 3D models, from online repositories to creating your own with photogrammetry, 3D-scanning technologies, or modelling software; 2) software to clean and reshape digital models in order to make them print-ready and to remove artifacts from the scanning process; and 3) 3D printers and the software to control and use them.

This workshop is targeted toward scholars interested in learning about the technologies surrounding 3D printing and additive manufacturing, and in accessible solutions for implementing desktop fabrication technologies in scholarly work. Past workshops have been offered to faculty, graduate and undergraduate students in the humanities, librarians, archivists, GLAM professionals, and digital humanities centers. This workshop is introductory in character, so little prior experience is necessary, only a desire to learn and be engaged with the topic. Those attending are asked to bring, if possible, a laptop computer to install and run the software introduced, and a digital camera or smartphone for experimenting with photogrammetry. Workshop facilitators will bring cameras, a 3D printer, plastics, and related materials for the event. By the end of the conference, each participant will have the opportunity to print an object for their own use.


Screengrab of a versioning engine prototyped by Daniel Carter

“Versioning Texts and Concepts”

Daniel Carter (Primary Author), University of Texas at Austin
Stephen Ross, University of Victoria
Jentery Sayers, University of Victoria
Susan Schreibman, Trinity College Dublin

This paper presents the results of the Modernist Versions Project’s (MVP) survey of existing tools for digital collation, comparison, and versioning. The MVP’s primary mission is to enable interpretations of modernist texts that are difficult without computational approaches. We understand versioning as the process through which scholars examine multiple witnesses of a text in order to gain critical insight into its creation and transmission and then make those witnesses available for critical engagement with the work. Collation is the act of identifying different versions of a text and noting changes between texts. Versioning is the editorial practice of presenting a critical apparatus for those changes. To this end, the MVP requires tools that: (1) identify variants in TXT and XML files, (2) export those results in a format or formats conducive to visualization, (3) visualize them in ways that allow readers to identify critically meaningful variations, and (4) aid in the visual presentation of versions.
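As a bare illustration of requirement (1), and not of any of the tools surveyed below, the following Python sketch identifies word-level variants between two plain-text witnesses with the standard library’s difflib; the file names are hypothetical stand-ins.

```python
# Illustrative only: word-level variant identification between two
# plain-text witnesses using Python's difflib. This is not one of the
# surveyed tools; file names are hypothetical stand-ins.
import difflib

serial = open("nostromo_1904_serial_ch3.txt", encoding="utf-8").read().split()
book = open("nostromo_1904_book_ch3.txt", encoding="utf-8").read().split()

matcher = difflib.SequenceMatcher(None, serial, book)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":                       # report only the variants
        print(tag, " ".join(serial[i1:i2]), "->", " ".join(book[j1:j2]))
```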

The MVP surveyed and assessed an array of tools created specifically for aiding scholars in collating texts, versioning them, and visualizing changes between them. These tools include: (1) JuxtaCommons, (2) DV Coll, (3) TEI Comparator, (4) Text::TEI::Collate, (5) Collate2, (6) TUSTEP, (7) TXSTEP, (8) CollateX, (9) SimpleTCT, (10) Versioning Machine, and (11) HRIT Tools (nMerge). We also examined version control systems such as Git and Subversion in order to better understand how they might inform our understanding of collation in textual scholarship. This paper presents the methodologies of the survey and assessment as well as the MVP’s initial findings.

Part of the MVP’s mandate is to find new ways of harnessing computers to find differences between witnesses and then to identify the differences that make a difference (Bateson). In modernist studies, the most famous example of computer-assisted collation is Hans Walter Gabler’s use of the Tübingen System of Text Processing tools (TUSTEP) to collate and print James Joyce’s Ulysses in the 1970s and 1980s. Yet some constraints, such as those identified by Wilhelm Ott in 2000, still remain in the field of textual scholarship, especially where collation and versioning applications are concerned. Ott writes, “scholars whose profession is not computing but a more or less specialized field in the humanities have to be provided with tools that allow them to solve any problems that occur without having to write programs themselves. This leads to a concept of relatively independent programs for the elementary functions of text handling” (97). Indeed, programs for collation work have proliferated since 2000, including additions to TUSTEP (TXSTEP) as well as the newest web-based collation program, JuxtaCommons.

Accordingly, the MVP has reviewed tools currently available for collation work in order to provide an overview of the field and to identify software that might be further developed in order to create a collating, versioning, and visualization environment. Most of these tools were developed for specific projects, and thus do what they were designed to do quite well. Our question is whether we can modify existing tools to fit the needs of our project or whether a suite of collation and visualization tools needs to be developed from scratch. This survey is thus an attempt to chart the tools that may be useful for the kinds of collation and versioning workflows our team is developing specifically for modernist studies, so we can then test methods based on previous tools and envision future developments to meet emerging needs. Our initial research with Versioning Machine and JuxtaCommons suggests that there is potential for bringing tools together to create a more robust versioning system. Tools such as the Versioning Machine work well if one is working with TEI P5 documents; however, we are equally interested in developing workflows that do not rely upon the TEI, or do not require substantive markup. Finally, we are examining whether version control systems such as Git present viable alternatives to versioning methods now prevalent in textual studies.

Our method adapts the rubric Hans Walter Gabler devised for surveying collation and versioning tools in his 2008 white paper, “Remarks on Collation.” We first assessed the code and algorithms underlying each tool on our list, and we then tested each tool using a literary text. In this particular case, we used two text files and two TEI XML files from chapter three of Joseph Conrad’s Nostromo, which we have in OCR-corrected and TEI-Lite marked-up states from the 1904 serial edition and the 1904 first book edition. During each test, we used a tool assessment rubric (available upon request) to maintain consistent results across each instance. All tests were accompanied by research logs for additional commentary and observations made by our research team.

Our preliminary findings suggest that: 1) Many existing collation tools are anchored in obsolete technologies (e.g., TUSTEP, which was originally written in Fortran, despite having undergone major upgrades, still relies on its own scripting language and front end to operate; also, DV-Coll was written for DOS, but has been updated for use with Windows 7). 2) Many tools present accessibility obstacles because they are desktop-only entities, making large-scale collaborative work on shared materials difficult and prone to duplication and/or loss of work. Of the tools that offer web-based options, JuxtaCommons is the most robust. 3) The “commons” approach to scholarly collaboration is among the most promising directions for future development. We suggest the metaphor of the commons is useful for tool development in versioning and collation as well as for building scholarly community (e.g., MLA Commons). We note the particular usefulness in this regard of the Juxta Commons collation tool and the Modernist Commons environment for preparing digital texts for processing. The latter, under development by Editing Modernism in Canada, is currently working to integrate collation and versioning functions into its environment. 4) Version control alternatives to traditional textual studies-based versioning and visualization present an exciting set of possibilities. Although the use of Git, Github, and Gist for collating, versioning, and visualizing literary texts has not gained much traction, we see great potential in this line of inquiry. 5) Developers and projects should have APIs in mind when designing tools for agility and robustness across time. Web-based frameworks allow for this type of collaborative development, and we are pleased to see that Juxta has released a web service API for its users. 6) During tool development, greater attention must be given to extensibility, interoperability, and flexibility of functionality. Because many projects are purpose-built, they are often difficult to adapt to non-native corpora and divergent workflows.


Post by Jentery Sayers, attached to the Makerspace, ModVers, and KitsForCulture projects, with the news, fabrication, and versioning tags. Featured images for this post care of Jentery Sayers, Jeremy Boggs, Devon Elliott, Brian Croxall, Daniel Carter, and the Digital Humanities 2013 website.

New Poster: Humanities on the Z-Axis (Mon, 24 Jun 2013)

Last week, we released a poster for our "Kits for Cultural History" project (overview forthcoming), and today we are doing something similar, this time with our new "Z-Axis" research initiative, conducted in collaboration with the Modernist Versions Project (MVP). Like the Kits project, the Z-Axis initiative is in a nascent state. And it, too, will be one of the Maker Lab's primary research areas during 2013-14. At CSDH/SCHN 2013, we presented a poster about it, and that poster is provided below as a low-resolution PNG. It is also available in PDF. Feel free to use either format for educational purposes. Soon we will publish an overview of the project here at maker.uvic.ca, but in this post I'll touch briefly on its goals and motivations.

[Poster: "Humanities Fall on the Z-Axis," presented at CSDH/SCHN 2013]

As the poster suggests, the Z-Axis initiative underscores the relevance of subjective encounters with data, with an emphasis on 3D modeling, prototyping, and desktop fabrication techniques. If, as humanities practitioners, we want to express our texts, media, and other cultural artifacts as data, then how are they felt? How are they experienced? What gives them texture? How are they welded or connected to embodied practices like reading and interpretation? Building upon research by Johanna Drucker, Kari Kraus, Mei-Po Kwan, Jerome McGann, Lev Manovich, Franco Moretti, Bethany Nowviskie, Stephen Ramsay, and Lisa Samuels (among others), the project explores speculative articulations of data with context, pushing methods like cultural analytics, programmatic ruination, and algorithmic criticism along the z-axis of data expression—through that third “variable” we in the humanities often use to stress the bodies, perspectives, and technologies that frame or situate otherwise abstract understandings of time and space.

To be sure, the Z-Axis initiative falls quite clearly on the "interpretive" end of the digital humanities spectrum. In other words, the maps, models, and 3D fabrications we are proposing will not—even superficially—present themselves as naturalized documentations or isomorphic replicas of actuality. They are not re-presentations or re-sources. Instead, they inject lived social reality into the workflows of data expression in order to assert the various ways in which the stuff of history is culturally embedded. For example, our initial prototype asks how readers might (consciously or unknowingly) experience Joyce's Ulysses as a profoundly geospatial text. Which parts of Dublin does Joyce privilege over others? What are the geospatial biases of the novel? If readers interpret the novel as a map of Dublin, then what impressions does it leave? These questions are not particularly interested in the geospatial accuracy of Ulysses or its place-based references. Instead, they are curious about how Ulysses transduces actual Dublin into fictitious Dublin, under what assumptions, and to what effects. You might say the Z-Axis project blends cartographic imaginations with cinematic impulses, or views from above with views on the ground. And this blend is anchored in an examination of how people interface (or at least are meant to interface) with things like novels, maps, screens, and 3D prototypes.
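One crude way to begin answering the question of which parts of Dublin the novel privileges is simply to count place references. The sketch below assumes a plain-text Ulysses and a small hand-made gazetteer (both hypothetical here, and no substitute for proper geospatial encoding); it produces only a first, rough impression of the novel's uneven attention to the city.

```python
# A blunt first pass at geospatial bias: count how often a handful of
# Dublin place names appear in a plain-text Ulysses. Both the text file
# and the short gazetteer are hypothetical stand-ins; a fuller pass would
# work from the geospatially encoded TEI instead.
import re
from collections import Counter

gazetteer = ["Sandymount", "Eccles Street", "Grafton Street",
             "Phoenix Park", "Dawson Street", "Westland Row"]

with open("ulysses.txt", encoding="utf-8") as f:  # hypothetical file name
    text = f.read()

counts = Counter({place: len(re.findall(re.escape(place), text, flags=re.IGNORECASE))
                  for place in gazetteer})

for place, n in counts.most_common():
    print(f"{place:20} {n}")
```

Raw counts like these say nothing yet about how the references are experienced in reading, which is precisely where the z-axis work picks up.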

Wondering how exactly the Ulysses prototype was made? Check out "Workflow for the 3D Map," published by Maker Lab and MVP team member Katie Tanigawa. As Katie acknowledges on her site and elsewhere, the workflow needs to be revised and improved through additional research. At this juncture, however, I will highlight how it brings print materials from the University of Victoria's Special Collections into conversation with emerging digital methods and then outputs an argument (i.e., a map) off the screen, in 3D. The workflow not only involves digitizing maps and encoding electronic versions of Ulysses with geospatial tags; it also systematically displaces and warps those maps using 3D sculpting software. Once they are watertight models, the maps can be printed using desktop fabrication software and hardware. Above, the poster gives only an elliptical sense of what a printed map would look like, providing multiple bird's-eye views (in the "Z-Axis Methods" and "Initial Prototype" sections) and one worm's-eye view in the footer. Importantly, these images are born-digital; they are not photographs of an actual, printed prototype. Still, through the use of the z-axis (to express reading time, for instance), the combination of these views would, in practice, foster attention to both pattern and texture. Yes, we still have some work to do. Nevertheless, we believe the prototype prompts a number of worthwhile research trajectories, especially where critical making meets modernist studies.
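To give a concrete, if toy, sense of where the z values might come from: if word count per geotagged stretch of text serves as a rough proxy for reading time, those counts can be normalized into extrusion heights for the map. The sketch below is emphatically not Katie's workflow (which runs through TEI encoding and 3D sculpting software), and its numbers are invented; it only shows one way per-location counts could become heights.

```python
# Turning a reading-time proxy into z-axis heights: word counts for the
# passages geotagged to each location are normalized against the largest
# count and scaled to a maximum extrusion height. All numbers are invented
# for illustration; the actual workflow derives them from the encoded text.
word_counts = {                  # hypothetical words per geotagged location
    "Sandymount Strand": 8200,
    "Eccles Street": 5400,
    "National Library": 3100,
    "Glasnevin Cemetery": 2600,
}

max_height_mm = 40.0             # tallest extrusion on the printed map
peak = max(word_counts.values())

z_heights = {place: round(count / peak * max_height_mm, 1)
             for place, count in word_counts.items()}

for place, z in sorted(z_heights.items(), key=lambda kv: -kv[1]):
    print(f"{place:20} {z:5.1f} mm")
```

Whether word count is in fact a defensible proxy for reading time is one of the open questions listed below.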

At this early stage in the project, we are wrangling with a number of important and difficult questions, including: How do the affordances of screen-based visualizations differ from those of printed models, and with what implications for literary and cultural criticism? Through what other forms or media (e.g., timelines) might z-axis methods be mobilized? Echoing Ramsay and Rockwell, how can these built media act as forms of scholarship? As standalones without essays that explain or rationalize them? How can z-axis methods be developed for comparative approaches to "versions" of modernism? To, say, the geospatial tendencies of modernist novels about either particular places (e.g., London, Paris, or New York) or travel/migration (e.g., Dos Passos's U.S.A. trilogy)? When novels are still covered by copyright, how can data visualization and desktop fabrication spark "non-consumptive" interpretations (where maps, and not primary texts, are released online by research groups like the MVP)? How can fields like modernist studies actively generate new data about or from literary history (e.g., geodata related to reading-time)? And finally, how does data related to word count correspond with reading experience? When georeferencing a novel, what inferences (if any) can be drawn about the time spent attending to "difficult" text (and parts of a text)? What's lost at scale, especially when we compare novels or chapters? In short, is word count a reliable indicator of time spent reading, or at least a persuasive way of getting at how novels are experienced over time?

During 2013-14, we’ll publish our responses here. For now, I want to thank everyone who has already contributed to the Z-Axis initiative: Alex Christie and Katie Tanigawa (two of our primary researchers), Stephen Ross (director of the MVP), Arthur Hain (for initially framing the research through the z-axis concept), and English 507 (who provided feedback during the Spring 2013 semester). Also, thanks to Nina Belojevic, Alex Christie, and Jon Johnson for working with me to make the “Humanities Fall on the Z-Axis” poster for CSDH/SCHN 2013.


Post by Jentery Sayers, attached to the ModVers and ZAxis projects, with the fabrication and versioning tags. Featured image for this post care of Nina Belojevic, Jon Johnson, and Jentery Sayers.

Building Play into Scholarly Communication (Thu, 20 Jun 2013)

What if publications offered readers a chance to play with the data (or capta) expressed in their visualizations, to challenge or confirm an article’s claim(s), to engage more directly in tacit learning and communal scholarship? What if an article laid bare not only its methods, but also its workflows, its data, and its code, and then let people play, tinker, or otherwise experiment with its data and graphical expressions? What if it even challenged people to “break” or modify its argument in order to further (or maybe redirect) its line of inquiry? What might this sort of publication look like? How would it work? And is this possibility slightly scary?

To begin addressing these questions, I’m imagining an article in an online publication like DHQ or NANO that would articulate a particular argument, and—instead of providing static images of data visualizations—it would embed a rich-prospect browsing interface (RPBI) like the Mandala browser into the argument itself. The data used in the article would be loaded into the browser, and audiences would be asked to play with that data, which the journal could also make publicly available (perhaps in various formats). For a first-year PhD student new to digital humanities, this possibility is both exciting and a bit terrifying. Among other things, this model is just begging for people to disprove, or at least rigorously test, your article’s claim and its corresponding data. Who wants a “break” button built into their scholarship?

In response to this question and its impulse, I think there’s another way to frame the integration of RPBIs into scholarly communication, and that is to think about the degree to which embedded browsers and other such tools for expression could help render our scholarship more persuasive and our data more transparent. This possibility would ask scholars to come together not just to break an argument, but to improve it. Such a model offers the research up to some playful and tacit hypothesis-testing that could afford new and unexpected insights in the humanities.

Of course, several questions emerge, including: How do you cite a source inside an RPBI? How reliable is the data in the first place? Where necessary, how could that data be made more reliable, or more persuasive? How much access do you give to audiences? And who is ultimately the author of an article that asks for audience engagement and creation? To be honest, I’m not entirely sure how to address these questions without actually enacting this mode of publication, so perhaps that is the next step. Write an article in which you share your data, outline the rationale for how the data was produced, and embed an RPBI so that users can play with your argument. More soon!
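In the meantime, even a very small gesture in this direction is possible: ship the article's data as a plain file and include a few lines that readers can rerun, tweak, or break. The sketch below is not an RPBI, and its file and column names are hypothetical; it only suggests the kind of minimal, inspectable starting point an embedded browser could build on.

```python
# A minimal "play with the data" gesture: the article ships a CSV of the
# capta behind its visualization, plus a few lines readers can rerun or
# rewrite. The file name and column names are hypothetical.
import csv
from collections import defaultdict

totals = defaultdict(int)
with open("article_data.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # assumed columns: category, count
        totals[row["category"]] += int(row["count"])

# Readers can change the grouping, filter rows, or substitute their own
# data to test how far the article's claim holds.
for category, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category:15} {total}")
```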


Post by Katie Tanigawa, attached to the ModVers project, with the versioning tag. Featured image for this post care of Katie Tanigawa’s use of the Mandala browser.

Returning to the Periodical Context (Fri, 14 Jun 2013)

Periodicals have been on my mind quite a bit in the last few months. Not only have I continued to work with Nostromo in both its serial and volume witnesses, but I also took a fantastic UVic English seminar taught by Dr. Lisa Surridge. During the seminar, we explored the relationship between text and image in Victorian literature. Quite often, we considered what Mark Turner calls the “periodical context” of the texts, looking at the other articles, images, and even ads that ran alongside our primary texts of study.

Work for this course frequently took me away from the screen, away from markup and visualizations, and into the archives. I flipped through the un-digitized pages of the Graphic, the Illustrated London News, and the Yellow Book, reading without a search function or the ability to run the text through an XSLT that spits out aggregate data (or capta) about the contents/contexts of the periodical. My hands were occasionally ink-stained, and my eyes watered in the semi-public setting of UVic’s Special Collections when I read the serial conclusion to Tess of the D’Urbervilles. But what was most important for me was that I began to reconsider serial fiction in its material context. One of the questions the class explored was how the surrounding contents of the periodical affect a reading of the text being examined.

The work that I do with Nostromo currently separates the serial text from its periodical context. Cedric Watts and Xavier Brice have already explored this avenue. Sites like Conrad First even archive entire issues of T.P.’s Weekly in order to show the periodical context of Nostromo. I had also been assuming that considering the periodical context was separate from my work in versioning the text. After Dr. Surridge’s course, I’ve changed my mind. How can I afford not to consider the periodical context of Nostromo? How can the serial Nostromo be re-envisioned as a text that does not simply lack what is present in the volume edition? How can the serial be re-envisioned as a text that contains what the volume edition cannot because of the periodical context in which the serial publication is necessarily embedded?

As I’m thinking about new directions for my research, in part by moving from the 1904 Harper volume edition to the more historically and contextually distant 1918 Dent volume edition, I also have to return to T.P.’s Weekly. I have to merge digital versioning practices with analogue periodical studies practices. Needless to say, I am looking forward to this merger of critical practice.


Post by Katie Tanigawa, attached to the ModVers project, with the versioning tag. Featured image for this post care of Conrad First’s Joseph Conrad Periodical Archive. 
