Tiffany Chan – MLab in the Humanities, University of Victoria

Histories of Digital Labour: Early OCR
Sat, 07 Jan 2017

Jentery and I just returned from giving a talk at the 2017 Modern Language Association (MLA) Convention in Philadelphia. It was titled “Early Histories of OCR (Optical Character Recognition): Mary Jameson and Reading Optophones” and was part of the “Histories of Digital Labor” panel convened by the MLA Committee on Information Technology and organized by Shawna Ross. Thank you, Shawna!


Post by Tiffany Chan and Jentery Sayers, attached to the KitsForCulture and ReadingOptophone projects, with the fabrication, news, and physcomp tags.

The Reading Optophone Kit
Wed, 23 Nov 2016

In November 2015, the MLab began work on remaking a reading optophone as the third volume in the Kits for Cultural History series. The optophone was a twentieth-century reading aid for the blind that converted print into audible tones. After significant practice and education, operators learned to distinguish patterns of tones as words or phrases. Here’s a video demonstrating how a reading optophone scanned type.

The “Optophone” (ca. 1921): a reading device for the blind. Credit: Matthew Rubery, Heather Tilley, Wellcome Library, London, and Wellcome Images.

Today, optophones are interpreted as precursors to optical character recognition (OCR), or the automated conversion of images into machine-readable text (e.g., Google uses OCR to make large amounts of digitized print material searchable on the web). Many origin stories about the optophone stress its invention without attending to key figures and contributions involved in using, maintaining, and developing the reading optophone over time.

Mary Jameson reading Anthony Trollope’s The Warden on an optophone, ca. 1921, care of Blind Veterans UK.

For example, Mary Jameson was one of the optophone’s earliest and longest-standing users and demonstrators. But as Victoria, Jentery, and I have argued elsewhere, existing descriptions of Jameson’s work diminish her contributions to the reading optophone’s development. Prototyping the optophone highlights Jameson’s unrecognized labour and that of other optophone users in ways that archival materials, current scholarship, and popular accounts do not. For more on the prototyping process and its implications, see my talks at HASTAC 2016 and Digital Humanities 2016. We have also created a repository for the Reading Optophone Kit.

Research Leads, Contributors, and Support

Since 2015, the following researchers have contributed to the Reading Optophone Kit: Tiffany Chan, Katherine Goertz, Evan Locke, Danielle Morgan, Victoria Murawski, and Jentery Sayers. Many thanks to Robert Baker (Blind Veterans UK), Mara Mills (NYU), and Matthew Rubery (Queen Mary University London) for their support and feedback. The Social Sciences and Humanities Research Council, the Canada Foundation for Innovation, and the British Columbia Knowledge Development Fund have supported this research.

Tracing letters and punctuation marks to create a Python script for the Reading Optophone Kit

Project Status

This project is ongoing, with plans for completion and exhibition in 2017. For more on the project as it develops, see the stream of posts below. You may also visit our reading optophone repository, which contains code and other associated files.


Post by Tiffany Chan, attached to the KitsForCulture project, with the projects, fabrication, and physcomp tags.

MLab in Interactions
Tue, 22 Nov 2016

The MLab is featured in the latest (Nov–Dec 2016) issue of ACM Interactions, a bimonthly publication about design and human-computer interaction. There, Jentery published a short piece titled “Design Without a Future,” featuring our research on the Early Magnetic Recording Kit as well as four photographs by Danielle.

Designing without a future positions prototyping as a negotiation with histories of media rather than as a speculation about possible futures. It also recognizes how many of the technologies we remake are no longer accessible and likely never will be again: they are broken, lost, missing, or not in circulation. Remaking them is thus about the contingencies of experience and interpretation, not ideal forms or designs.

Thanks to Daniela Rosner for feedback on drafts of this publication. A photograph of the cover is above.


Post by Tiffany Chan, attached to the KitsForCulture and EarlyMagneticRecording projects, with the news tag. Featured image care of Interactions. 

Designing for Difficulty
Sun, 31 Jul 2016

In July, I gave a talk on prototyping and remaking old technologies at DH2016 in Kraków, Poland. In that talk, titled “Designing for Difficulty,” I argued that we often privilege ease of use or neatness in how we design technologies and study the narratives surrounding them. Difficulty is not always ideal or desirable, but we should attend more carefully to who or what is ignored when we dismiss difficulty out of hand. Prototyping old technologies can help us design with difficulty and uncertainty in mind—to see where messiness does not get in the way of meaning but where it becomes, itself, meaningful.

Consider the process of remaking an optophone, a reading aid for the blind developed and used during the twentieth century (1910s–60s) that converted text into sound. Here, difficulty emerges in three ways: at the levels of 1) the archive, 2) trial-and-error remaking or rapid prototyping, and 3) interface design.

To create a schema for converting text to sound, I printed out, traced, and diagrammed letterforms to determine which tones played and for how long. This process is fraught with critical decisions. For example, a serif font might return different results than a sans-serif font, and it’s more than likely that the type produced by my word processor differs significantly from typefaces commonly produced during the optophone’s use. In cases where letters contain sloped lines that ascend or descend (e.g., A, K, M), splitting the letters into more segments would create a smoother transition from high to low tones or vice versa—similar to how having more pixels in an image makes the image appear smoother overall. (For more on the process of converting type to sound, see my previous post on the optophonic schema.)
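To make the segmentation concrete, here is a minimal sketch in Python of how a traced letter might map to a sequence of chords. The frequencies and letter tracings below are illustrative values of my own, not the historical optophone’s actual schema:

```python
# A toy optophonic schema. Each letter is traced on a grid of columns;
# every row is keyed to a fixed frequency, and the black cells in a
# column sound together as a chord while the tracer passes over it.
# All values here are illustrative, not historical.

ROW_FREQS = [256, 320, 384, 512, 640]  # Hz, bottom row to top row

# Hypothetical tracings: one set of active rows per column.
LETTER_I = [{0, 1, 2, 3, 4}]           # a single vertical stroke
LETTER_L = [{0, 1, 2, 3, 4}, {0}]      # vertical stroke, then a base

def column_chords(columns):
    """Return the list of frequency chords for a traced letter."""
    return [sorted(ROW_FREQS[row] for row in cols) for cols in columns]

print(column_chords(LETTER_L))
```

Splitting a sloped stroke into more columns, as described above, would simply yield more (and more finely graded) chords per letter.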

But there are also cases where my schema differs from the diagrams found in historical documents. These differences could be a function of my word processor, or of a particular optophone illustrated by a historical diagram. But the more informative realization might be that such diagrams were probably, in one way or another, simplifications. To put it differently, historical sources (written or otherwise) are interpretations in and of themselves, subject to redaction, occlusion, or omission. Framed for particular audiences, they are imbued with values and biases that may only have emerged through past testing, use, and experimentation in the same way that they emerge through prototyping now.

To prototype the tracer (the optophone mechanism that converted printed material into an electrical signal and then into sound) and simulate its behaviours, we’re using a Raspberry Pi and a camera similar to a laptop camera. However, there are aspects of the tracer that cannot be expressed through simulation. For example, operators of early optophones could select the pace and location of reading with a much higher degree of control than is possible with our current setup. They could scan the same spot repeatedly or move the handle to a different reading location at any point in the process. By contrast, the Raspberry Pi prototype (which is designed to scan discretely, character by character, instead of continuously) cannot account for this kind of fine-grained control or manual feedback. By this measure, it is, perhaps ironically, the newer technology that seems inefficient and inadequate. Exposing the limitations of current technologies in recreating past ones disputes the common assumption that innovation progresses teleologically or straightforwardly towards a brighter future.

Moreover, we might ask what an appropriate technical goal or stopping point might be in terms of usability. One impulse might be to design and recreate an optophone where tones would be as easy to distinguish and interpret as possible. But this desire, to remake an ideal optophone, aligns with commercial attitudes towards technology that require devices to be rigorously optimized. By this logic, technologies are—and always should be—faster, cleaner, and easier to use than their predecessors; that is the stuff of progress. But such an optophone never existed, at least according to historical accounts.

Historical sources suggest that Mary Jameson, after two hundred hours of practice, was able to read at a speed of one word per minute at a public demonstration. After years of practice, she was able to read sixty words per minute while most operators were able to read at a speed of around twenty words per minute. Using the optophone required an enormous amount of sustained practice, effort, and labour, which goes unrecognized except as a measure or proof of the optophone’s efficiency.

Moreover, sighted people concentrated on the conversion process itself (of text to sound), at the expense of the other processes involved in setting up and operating the optophone frame. For example, before an operator could even begin listening to tones, she or he would have to plug in the optophone, place the book on the frame and line up the tracer with the first line of type, and “tune” the optophone with a knob (presumably to the correct font height). Furthermore, certain design features of the optophone suggest that blind operators were able and expected to operate it independently, without help. For instance, the connectors (plugs) on the sides of the frame were different sizes so that they couldn’t be confused, and at least one knob had nicks to enable the operator to count how far it had turned.

Illustration of the Optophone’s Features (care of S.W. Clatworthy, in Popular Science Monthly, 1920)

When prototyping or remaking, we are often faced with instances of limitation, frustration, absence, and surprise: aspects of history we cannot access or know for sure. Prototyping the past might be one way to approach such instances, not necessarily as a problem of technical ability or engineering—that is, not as a problem of tools or competence—but as a deeply cultural question, or even as a question of design. When and how might we want to communicate difficulty or labour? When might failure and frustration not be barriers to overcome, but rather ways to probe and understand the contexts that shape our limitations and how we came to understand them? We might also approach the question of absence or limitation as an ethical question, or a question of care.

Mary Jameson Demonstrating the Optophone

To consider prototyping as a way of doing history is also to consider how prototyping may challenge historical narratives by rewriting the record and attending to previously ignored figures such as Mary Jameson. Jameson (whom historical documentation describes only as a “user” or demonstrator, if it mentions her at all) played a central role in optophone development that is unrecognized in histories of the optophone and optical character recognition (OCR) today. Prototyping the optophone with Jameson in mind offers us a way to stress maintenance, development, and incremental change in contrast to masculine, “make or break” narratives of innovation and hyperbole.

Asserting Jameson as a key developer in the history of OCR counters dominant myths about innovators: who they typically are and in what contexts innovation emerges. In fact, prototyping becomes a way to interrogate innovation itself, including, as in the case of the optophone, where it intersects with issues of gender and ability and their role in practice, both yesterday and today.


Post by Tiffany Chan, attached to the KitsForCulture project, with the physcomp and fabrication tags. Featured image care of Tiffany Chan. Thanks to Robert Baker (Blind Veterans UK), Mara Mills (New York University), and Matthew Rubery (Queen Mary University of London) for their support and feedback on this research. Research conducted with Katherine Goertz, Danielle Morgan, Victoria Murawski, and Jentery Sayers.

New Repository: Optophone Kit
Thu, 21 Jul 2016

The MLab now has a GitHub repository for the Optophone Kit (part of the Kits for Cultural History series). Like the Kit, the repository is a work in progress and will be updated as the project evolves.

At the moment, Version 1.1 of the repo has three separate Python scripts for three different functions: taking an image of text and making it machine-readable, generating tones for playback, and matching and playing the corresponding tone for each character. It also has instructions for using or modifying the scripts to make and/or play your own optophonic sounds.


Post by Tiffany Chan, attached to the KitsForCulture project, with the physcomp and news tags.

Designing for Difficulty: MLab at DH2016
Tue, 12 Jul 2016

As part of our Kits for Cultural History (KCH) project (supported by the Social Sciences and Humanities Research Council of Canada), I am speaking at Digital Humanities 2016 in Kraków, Poland this week. Here are my slides for the talk, and below is the first slide together with my references. The talk addresses all three projects in the KCH series (early wearables, early magnetic recording, and early optophonics); however, it focuses on the reading optophone, which I’ve been developing in the lab during the last few months. It is an honour to present for the first time at the annual Digital Humanities conference, and I’ll publish more about the reading optophone soon.

Digital Humanities 2016, "Designing for Difficulty"

Chan, Tiffany, Victoria Murawski, and Jentery Sayers. “Remaking Optophones: An Exercise in Maintenance Studies.” Gayle Morris Sweetland Centre for Writing, 14 Mar. 2016. Web. https://www.digitalrhetoriccollaborative.org/2016/03/14/remaking-optophones-an-exercise-in-maintenance-studies/

Cooper, Franklin S., Jane H. Gaitenby, and Patrick W. Nye. “Evolution of Reading Machines for the Blind: Haskins Laboratories’ Research as a Case History.” Journal of Rehabilitation Research & Development 21.1 (1984): 51-87. Web. https://www.haskins.yale.edu/Reprints/HL0455.pdf

Fournier d’Albe, E. E. The Moon Element: An Introduction to the Wonders of Selenium. New York: D. Appleton and Company, 1914. Web. https://archive.org/details/moonelement002067mbp

Hendren, Sara. “All Technology is Assistive: Six Design Rules on ‘Disability’.” Backchannel. Medium.com. https://backchannel.com/all-technology-is-assistive-ac9f7183c8cd

Jameson, Mary. “The Optophone: Its Beginning and Development.” The Sixth Technical Conference on Reading Machines for the Blind. 27-28 Jan. 1966, Washington, D.C. Bulletin of Prosthetics Research, 1966. Web. https://www.rehab.research.va.gov/jour/66/3/1/25.pdf

—. “The Optophone or How the Blind May Read Ordinary Print by Ear.” …And There Was Light 1.4 (1932): 19-22. Web. https://archive.org/stream/andtherewasli01unse#page/18/mode/2up

Mills, Mara. “Optophones and Musical Print.” Sounding Out (2015): n.pag. Web. https://soundstudiesblog.com/2015/01/05/optophones-and-musical-print/

Morgan, Danielle. “Jacob: Recording on Wire.” Maker Lab in the Humanities. MLab (University of Victoria), 2 Jun. 2016. Web. ./jacob/

Rubery, Matthew. “‘How We Read’: The Optophone.” Online video clip. YouTube. YouTube, 23 Aug. 2015. Web. https://www.youtube.com/watch?v=w0wuIv1JVGU

—. “Reading Machines.” How We Read: A Sensory History of Books for Blind People. 2016. Web. https://www.howweread.co.uk/gallery/reading-machines/

Sayers, Jentery. “Kits for Cultural History.” Hyperrhiz 13 (2015): n.pag. Web. https://hyperrhiz.io/hyperrhiz13/

—. “Prototyping the Past.” Visible Language 49.3 (2015): n.pag. Web. https://visiblelanguagejournal.com/issue/172/article/1232

—. “Why Fabricate?” Scholarly Research and Communication 6.3 (2015): n. pag. Web. https://src-online.ca/index.php/src/article/view/209/428


Post by Tiffany Chan, attached to the KitsForCulture project, with the physcomp, news, and fabrication tags. Featured image care of DH 2016. Thanks to Robert Baker (Blind Veterans UK), Mara Mills (New York University), and Matthew Rubery (Queen Mary University of London) for their support and feedback on this research. Research conducted with Katherine Goertz, Danielle Morgan, Victoria Murawski, and Jentery Sayers.

An Optophone Schema: From Text to Sound
Mon, 04 Jul 2016

For our Optophone Kit in the Kits for Cultural History series, I’m working on the automated conversion of text into sound. In a common version of the optophone, this conversion was executed by a “tracer.” An operator would use an attached handle to move the tracer from left to right, scanning type as it went.

Image of a tracer from a 1920 issue of Scientific American

Tracers used selenium to not only detect changes in light—for example, the difference between black type and white page—but also translate these differences into electrical signals and then into continuous streams of sounds or silence. As an optophone scanned across a page, an operator interpreted the pattern of sound or silence across time as specific characters or words.

To remake a tracer, I’m using a Raspberry Pi (RPi) and a small camera similar to a laptop camera (see photograph below). The camera takes a picture of the text and passes the image to an optical character recognition (OCR) program, which converts the image into a string of characters (see script).

Optophone Prototype with an RPi

Using the Python programming language, the RPi then matches each character to a sound file in a pre-generated dictionary of sounds (see script). The sounds are played through a pair of headphones connected to the RPi. To make a custom dictionary, I printed a series of letters and common punctuation marks and then traced over them (see first image below) according to Patrick Nye’s diagram (see second image below) of one particular optophone schema. (The schema changed across versions of the optophone.)
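A minimal sketch of this lookup step, assuming one pre-generated WAV file per character, might look like the following; the directory and naming scheme are hypothetical, not the Kit’s actual layout:

```python
# Hypothetical character-to-sound lookup for an optophone prototype.
# Assumes one pre-generated WAV file per character; the "sounds/"
# directory and filenames are illustrative, not the Kit's actual files.

SOUND_DIR = "sounds"

def build_dictionary(chars="abcdefghijklmnopqrstuvwxyz.,;"):
    """Map each character to the path of its pre-generated tone file."""
    return {c: f"{SOUND_DIR}/{ord(c)}.wav" for c in chars}

def playlist(text, sounds):
    """Return the sound files to play for a recognized string.
    Characters without an entry (e.g. spaces) become None, i.e. silence."""
    return [sounds.get(c) for c in text.lower()]

tones = build_dictionary()
print(playlist("To.", tones))
```

The real script would then hand each file in turn to an audio player on the RPi; returning `None` for unknown characters keeps the timing of silences explicit.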

Tracing Letters and Punctuation Marks

Image care of Mara Mills, in “Optophones and Musical Print”

Each part of the letter is keyed to a different frequency, and the pattern of frequencies distinguishes one character from another. In Python, I converted this pattern into tones and then into sound files (see script). See below for a video demonstrating what “Type” may have sounded like according to this optophonic method. The next step is to express an entire page of type as a series of tones.
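For illustration, a chord of frequencies can be rendered to a WAV file using only the Python standard library; the sample rate, duration, and amplitude below are placeholder values, not the Kit’s actual parameters:

```python
import math
import struct
import wave

RATE = 22050  # samples per second (placeholder value)

def write_chord(path, freqs, seconds=0.2, amp=0.5):
    """Write one letter segment as a mono 16-bit WAV file in which
    all of the segment's frequencies sound together."""
    n = int(RATE * seconds)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(RATE)
        frames = bytearray()
        for i in range(n):
            t = i / RATE
            s = sum(math.sin(2 * math.pi * f * t) for f in freqs)
            s = amp * s / max(len(freqs), 1)  # normalize to avoid clipping
            frames += struct.pack("<h", int(s * 32767))
        w.writeframes(bytes(frames))

# One hypothetical segment: two rows of the letter sounding at once.
write_chord("segment.wav", [256, 512])
```

Concatenating one such segment per traced column, letter after letter, is one way to express a word (and eventually a page) as a series of tones.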


Post by Tiffany Chan, attached to the KitsForCulture project, with the physcomp and fabrication tags. Featured image care of Tiffany Chan. Thanks to Robert Baker (Blind Veterans UK), Mara Mills (New York University), and Matthew Rubery (Queen Mary University of London) for their support and feedback on this research. Research conducted with Katherine Goertz, Danielle Morgan, Victoria Murawski, and Jentery Sayers.

The Optophone at HASTAC 2016
Sat, 21 May 2016

Earlier this month, I went to HASTAC 2016 at Arizona State University (ASU) to present MLab research on the optophone. The optophone was a reading aid for the blind that, beginning in the 1910s, converted text into sound. It was also a precursor to optical character recognition (OCR) technology. In my talk, I stressed how prototyping the optophone helps us better understand what is missing or what we cannot know for sure about it. These gaps are important to keep in mind because—at least to our knowledge—no stable, working version of the optophone exists today. But more important, the prototyping process stresses absences in the historical record—traces of people, agents, and labour that have been ignored, lost, destroyed, or otherwise made inaccessible.

As Victoria, Jentery, and I discuss elsewhere (based on research we conducted with Danielle and Katherine), Mary Jameson was a key developer of early optophones despite the fact that historical sources diminish her contributions to media history by referring to her as a demonstrator or user. Prototyping the optophone becomes a way to better understand how Jameson (and others like her) may have contributed to the optophone and developed it over time without assuming we can ever inhabit or recover that labour. It also becomes a way to write Jameson into a historical record that contains significant omissions, exaggerations, and distortions. For example, sources written by sighted people—as well as current historical narratives about the optophone—focus on the conversion of text into sound, at the expense of the demanding procedures involved in setting up and navigating the various components of the optophone’s frame.

Photograph of an optophone care of Robert Baker and Blind Veterans UK

Moreover, sources such as The Moon Element, by E.E. Fournier d’Albe (credited as the optophone’s inventor), also tend to reference reading speed as the ultimate measure of success and the means by which all blind people’s problems would be solved. Although historical documents suggest that, after years of practice, Mary Jameson read at a rate of up to sixty words per minute on an optophone, this rate depended tremendously on which reading optophone Jameson was using, when, and in what context. Furthermore, if we read the writing of operators such as Jameson and Harvey Lauer, we might find different measures of success altogether, including how the optophone allowed them to access not only texts previously legible only to sighted people but also materials beyond the conventional print categories that Fournier d’Albe used as a baseline or default. Examples of the latter include using an optophone to check if a pen is working or to read labels on packaged goods in order to relabel them in Braille.

MLab prototype of an optophone frame with an RPi and camera

If we prototype the conversion process itself, then we also get a sense of just how intricate early conversion and interpretation were, despite the fact that sighted people such as Fournier d’Albe suggested they were quite simple. In the MLab, I am currently using a combination of Python, OpenCV (open source computer vision), a Raspberry Pi, and—somewhat ironically—OCR technology to recreate the optophone’s conversion mechanism or “tracer.”

Image of the “tracer” in a 1920 issue of Scientific American

Tracers would scan each line of type with continuous beams of light, producing an analogue stream of noise and silence corresponding with their movement across the page. But since OCR reads each line discretely, character by character, it would be very difficult, if not impossible, to capture the same degree of fine-grained feedback and control that an operator likely had in the past. Put this way, prototyping points to what we cannot access instead of what is available at hand. Exposing the limitations of current technologies when remaking past ones also disputes the common assumption that innovation always marches in a teleological or straightforward path toward a new and improved future.

Python script for an optophone prototype

From this perspective, prototyping may resist grand historical narratives by stressing the everyday work of incremental development as much in its consideration of the past as it does in present-day praxis. It also highlights differences between then and now and points to absences in the historical record that can be addressed only through conjecture, never entirely proved or disproved. In doing so, it foregrounds how media history is contingent and slippery, and manifests this elusiveness in material form. As a kind of inquiry, prototyping acts with materials even as it works against the grain of written history.


Post by Tiffany Chan, attached to the KitsForCulture and HASTAC projects, with the news, physcomp, and fabrication tags. Featured image care of the MLab. Thanks to Robert Baker (Blind Veterans UK), Mara Mills (New York University), and Matthew Rubery (Queen Mary University of London) for their support and feedback on this research.

New MLab Piece on Remaking Optophones
Thu, 24 Mar 2016

Victoria, Jentery, and I recently wrote a piece on the process of making an Optophone Kit, which is the third kit in the MLab’s Kits for Cultural History series. Titled “Remaking Optophones,” the piece was published in the Digital Rhetoric Collaborative (DRC) series on makerspaces and writing practices.

The reading optophone, which existed in multiple forms throughout the twentieth century, was an aid for the blind that converted text into sound. At the Lab, we are currently remaking one common configuration of the device, popularized by one of its first operators, Mary Jameson. Operators placed print materials on glass, then used a handle to move a reading head located below the glass, sliding it back and forth to scan pages. As pages were scanned, the machine would express the type as a series of audible tones. To listen, operators plugged telephone receivers into the device and wore them over their ears, like headphones. Through hours of practice, operators learned to interpret these tones as corresponding letters or words.

Mary Jameson reading Anthony Trollope’s The Warden on an optophone, ca. 1921, care of Blind Veterans UK.

Building off work by Mara Mills, Matthew Rubery and Heather Tilley, and Robert Baker (archivist at Blind Veterans UK), we hope to better understand how optophones changed over time by remaking one with particular attention to Jameson’s contributions to not only optophonics but also early optical character recognition (OCR). More on this process over at the DRC.

We would like to thank Jenae Cohn and the DRC team for their support and feedback on this piece.


Post by Tiffany Chan, attached to the KitsForCulture project, with the news and fabrication tags. Featured image care of the Digital Rhetoric Collaborative.

Kit Published in the New Issue of Hyperrhiz
Thu, 10 Dec 2015

Hyperrhiz, an online journal of new media criticism and net art, recently published the Maker Lab’s Early Wearables Kit in its 13th issue, “Kits, Plans, Schematics.”

The publication consists of five components: 1) an essay by Jentery that discusses the relation between the Kits for Cultural History and Fluxkits from the 1960s, ’70s, and ’80s; 2) a slideshow of posters by Victoria and the MLab team that document our process of making the Early Wearables Kit; 3) an “unboxing” video by Danielle and the MLab team that shows how someone might interact with the Wearables Kit; 4) a GitHub repository by the entire MLab team that contains the Kit’s core files and components (see our previous announcement for the repo); and 5) a brief “about” page describing the project.

Along with the recent exhibition of the Wearables Kit at Rutgers, the MLab’s appearance in Hyperrhiz demonstrates our multimodal approach to publishing Kits for Cultural History. We hope that this new issue of Hyperrhiz will inspire more like-minded publishing projects across the arts and humanities.


Post by Tiffany Chan, attached to the KitsForCulture project, with the news, exhibits, and fabrication tags. Image care of the Maker Lab and Hyperrhiz.
