You Can Read the Comments Section Again: The Faciloscope App and Automated Rhetorical Analysis

Overview

The Faciloscope (see Figure 1) is a web application that employs a support vector machine (svm) classifier (Cortes and Vapnik, 1995) to annotate high-value facilitation moves in online, informal learning and discussion environments: Staging, Inviting, and Evoking. The Faciloscope app and its supporting “About” information can be accessed at:

http://faciloscope.cal.msu.edu/facilitation/about/

Figure 1. Faciloscope Results Page

A brief history of the project, its aims and objectives

The Faciloscope has its origins in a larger project funded by the Institute for Museum and Library Services (IMLS) called the Facilitation Project. Based at the Writing in Digital Environments (WIDE) research center at Michigan State University (http://wide.cal.msu.edu/), the Facilitation Project was a research study designed to investigate facilitation styles and their outcomes in two distinct but representative museum environments. The first environment, Science Buzz at the Science Museum of Minnesota, is a popular website known as an exemplary platform for exploring current science. The second environment is the more distributed use of social software at the North Carolina Museum of Life and Science (MLS). Instead of creating learning platforms that are hosted internally, MLS experimented with building learning communities where people gather on the web in spaces like Flickr, Twitter, YouTube, and Facebook. The Facilitation Project’s main aim was to build on a prior research project called Take Two, in which the WIDE team identified facilitation in social media environments as a likely powerful practice and noted distinct styles in use at both museums. Facilitation is an analogue in informal learning spaces such as museums to “pedagogy” or “instruction” in formal learning spaces such as schools and classrooms.

The Facilitation Project team sought to refine and use replicable facilitation styles in order to identify and evaluate outcomes associated with those styles.

The Facilitation Project was organized as a set of sequenced research projects, each of which played out in social media spaces and resulted in records of online participation. One example of such an event was a concept and practice developed at the MLS called “Experimonth.” Each Experimonth is a month-long participatory inquiry and sharing experiment that brings scientists and citizens together to use data and observation to make meaning. For the Facilitation Project, Experimonths were run at both institutions: “Experimonth: Race” at the Museum of Life and Science and “Experimonth: Identity” at the Science Museum of Minnesota. In order to create spaces where participation and meaningful conversations could occur, a dedicated staff of facilitators was present at both museum sites to guide, prompt, and question participants about issues relating to race and identity. Consequently, each Experimonth produced transcripts, or threads, of online interaction that the team then analyzed using systematic coding (human-applied descriptions using a calibrated rubric or coding guide). The team’s analysis of the threads was guided by four overarching categories: 1) Change, 2) Learning Discourse Environment, 3) Facilitation, and 4) Other. Within these categories existed multiple second-level codes that helped narrow down the rhetorical purpose of each move. Consideration of these categories served two ends: first, to identify moves that encapsulated the act of facilitating learning, and second, to identify moves that might indicate learning by and among participants. In the transition from the pilot study to the work done during Experimonth: Race & Identity, the research team shifted attention away from merely identifying facilitation moves and focused more on identifying outcomes like learning.

The same coding scheme was used to analyze data collected from the final phase of the Facilitation Project, called FeederSketch. For this phase, the research team worked with Project FeederWatch, a yearly citizen science initiative that asks birdwatchers to observe and record data about the birds that visit their feeders. The goal in working with FeederWatch was to help improve observation accuracy and speed through the introduction of sketching as a learning tool. With this goal in mind, the team created FeederSketch, an eight-week-long program that ran in tandem with the 2012-2013 FeederWatch season. FeederSketch was implemented and facilitated on the FeederWatch forum and consisted of varying activities asking participants to draw as a part of their daily bird observations. More importantly, FeederSketch asked participants to share their drawing experiences, which opened up conversations to moments of change and learning. With this final phase, the team was able to reaffirm the validity of the coding scheme developed at the beginning of the project and apply it to a completely new online environment with its own set of practices, participants, and goals. All of the project phases, taken together, provided evidence that any online environment can be a learning environment with the inclusion of dedicated facilitators equipped with pointed facilitation techniques.

As a result, the Facilitation Team produced two things that became the foundation for the Faciloscope application that we present here. First, the team described and validated important facilitation “moves” that, over time, produced desirable outcomes in the online facilitation sessions. In particular, the team found evidence that certain rhetorical moves, such as invitations (asking questions) and constructing connections between people and ideas, more frequently led to moments of learning or change in participants. These moves were precisely described for two audiences. For researchers, there is a coding scheme that allows for valid and reliable analysis of transcripts of facilitation sessions. And for facilitators and those learning to be online facilitators, there is a “facilitation cheat sheet” which describes the key moves, how to make them, and what kinds of desirable outcomes each can be used to produce. Both of these guides are published online in the team’s “Facilitation Toolbox” site (see http://facilitation.matrix.msu.edu/index.php/resources/).

Second, and perhaps most important for our purposes, the Facilitation team produced a very large data set of validated facilitation sessions. These were transcripts of online exchanges that had been evaluated by humans and judged to have gone well. We also could be sure that they contained a concentration of the target moves we would be training our machine-learning classifier to identify. These data provided both the training resources we needed as well as the means to evaluate the success of the application we would come to call the Faciloscope.

The idea for the Faciloscope arose when members of our Computational Rhetoric Group at WIDE heard a project update from the Facilitation team that included a presentation on the Facilitation Cheat Sheet. We wondered: could we make the Facilitation team’s work more replicable, especially by those who were working in museums as facilitators, by creating an application to assist them in reliably analyzing a facilitation session?

Review of related projects

The Faciloscope project takes as its theoretical and technological forebears Kaufer and Ishizaki’s Docuscope Project, hosted at Carnegie Mellon University (see Kaufer & Ishizaki, 2016; Collins et al., 2004; Ishizaki & Kaufer, 2011). As described on the project website, Docuscope is a text analysis platform designed to support rhetorical analysis of text corpora. Much like the Faciloscope project, Docuscope attempts to translate existing rhetorical theories of language use into an automated, visual environment. Unlike the Faciloscope, Docuscope relies on a dictionary of words, phrases, and other strings annotated for their rhetorical effects by human coders to conduct analysis. In other words, a text supplied for analysis is compared to existing Docuscope dictionaries of string patterns and their associated rhetorical effects. When correspondences are discovered, the input text is labeled with those rhetorical tags found in the Docuscope dictionaries.

Docuscope tags range from lexical features to complex concepts. Basic grammatical or syntactic features include PersonPronoun (“her,” “him,” or “she”) and Numbers (“eight” or “two”). More complex rhetorical features include Comparison, which targets words and phrases such as “more” or “eldest,” both of which imply distinction from another entity or quantity. Emotional valence is captured by tags such as Fear (“dangerous,” “afraid of”), Anger (“cruelty,” “an evil”), and Sad (“sorry,” “sorrow,” “tears”). Docuscope also annotates for action in tags such as Motions (“drawing”) and force in tags such as Positivity (“wonderful,” “my dears”). The frequency of these tags is returned to the user, as well as their relative distribution within the text. In this way, Docuscope functions as a much more granular tool for rhetorical analysis than the Faciloscope. Docuscope accounts for a manifold of rhetorical features pre-coded by expert rhetoricians and grants users the ability to trace their aggregates throughout a text via color coding and frequency counts. In contrast, the Faciloscope collapses rhetorical features into three capacious categories of facilitation moves: Staging, Evoking, and Inviting. While the Faciloscope employs supervised machine learning of hand-coded training data, similar to the hand-coding of Docuscope dictionaries, the minute lexical and rhetorical features originally tracked by human raters have been absorbed by the rules of the codebook. That said, the Faciloscope and Docuscope do share similarities in the ways in which users engage with processed data. After processing, Docuscope allows users to cross-reference Docuscope tags and their incidence in the natural language text. The Bands visualization used by the Faciloscope offers users the same utility to scrub through portions of the input text that have been color-coded by the app. Thus, Docuscope and the Faciloscope share a similar content model, predicated on retaining the integrity of the input text while providing assistive, global guidance to users.

Hosted at the University of Wisconsin-Madison, Ubiqu+Ity (Vep.cs.wisc.edu, 2016) is a web version of Docuscope that allows users to upload text files to run Docuscope’s dictionary analysis. After processing, Ubiqu+Ity yields a downloadable csv file with frequency distributions of rhetorical and linguistic tags and an interactive HTML version that simulates the original Docuscope user interface. In addition to rhetorical annotations, the Ubiqu+Ity web application allows users to customize output by uploading a “Blacklist” of ambiguous terms to strike so that they do not confuse tagging. Ubiqu+Ity also allows users to customize the rules applied to their tagging request.

The ARG-tech group based at the University of Dundee has created a host of different software applications intended to automate and visualize the structures of arguments. One project is the Online Visualization of Arguments (OVA) tool (Arg.dundee.ac.uk, 2016). OVA allows users to diagram and semantically mark up arguments as a series of nodes and links in order to reveal formal organizational principles. OVA builds upon an earlier argument mapping platform, Araucaria, which allows users to mark up arguments and export them as visual diagrams or structured XML representations (Reed & Rowe, 2004; see also Rowe et al., 2006). In the work of the ARG-tech group, we see again an effort to use software to supplement the human analysis of rhetorical action by building new models of the text, whose features suggest alternative perspectives or means of validation. That said, the work of the ARG-tech group, while invested in sites similar to those of the Faciloscope (namely, online human interactions), arises from a different theoretical tradition than the Faciloscope project. Tools such as OVA and Araucaria are informed by argumentation studies and informal logic, which emphasize the conceptual workflows of argument, or how constituent components are related to other components. The Faciloscope project, on the other hand, takes a turn from a focus on argument/dialectic, strictly defined, to a more holistic view of rhetorical reasoning. Our aim is to understand what rhetorical strategies work in relatively stable social situations and what relationships, if any, can be detected between the rhetorical patterns deployed and the specific outcomes desired. As such, the Faciloscope is designed to return to the user rhetorical information more closely akin to what Swales and Najjar (1987) term “rhetorical moves”--persuasive maneuvers which enact social action based on their engagement with recognizable genre conventions and behaviors, as opposed to the abstract schemas created in argumentation studies and informal logic (see Harris & DiMarco, 2009, p. 47 for a similar observation).

The work of Randy Harris and Chrysanne DiMarco of the Artificial Intelligence Group at the University of Waterloo is also self-identified as computational rhetoric. A key project for comparison with the Faciloscope is Harris and DiMarco’s (2009) effort to create an ontology of rhetorical figures (see also Kelly et al., 2010 and DiMarco & Harris, 2011). This ontology comprises linguistic and stylistic markers of a text and an associated classical rhetorical figure. Harris and DiMarco present as an example their ontological category of InclusionScheme for the common figure of repetition, or ploche. This InclusionScheme would feature a sub-class of Iteration, which in turn features its own sub-class of IsA Iteration to account for a simple repetition of a word for emphasis (Harris & DiMarco, 2009, pp. 49-50). The ontology of rhetorical figures project, thus, is similar to Docuscope in its efforts to label microfeatures of a text with a range of categories. Its chief difference is its adherence to classical rhetorical tropologies.

The Augmented Criticism Lab, led by principal investigator Michael Ullyot, is pursuing work similar to that of Harris and DiMarco. The Rhetorical Schematics Project (see Augmented Criticism Lab, 2014) tracks rhetorical figures as they are deployed in early modern English plays in order to identify how rhetorical figures are being used across multiple works and authors.

To put a finer point on the disciplinary situation of the Faciloscope, each of the projects described above attempts to translate the complex knowledge of human experts in rhetoric into a machinic process of assistive, not unilateral, analysis. One of the key differences among these projects is the moment of algorithmic engagement. OVA allows users to participate in the model building as they make connections between argumentative nodes. Docuscope, Ubiqu+Ity, the Rhetorical Schematics Project, and the ontologies of figures by Harris and DiMarco (2009) extract granular features and use these features to inform higher-level typologies. At its core, the Faciloscope does all of these things, but obliges users to engage with its analytical results farther down the pipeline. The facilitation moves of Staging, Inviting, and Evoking include the figural and stylistic features tracked by Docuscope; however, these figures have already been processed by human raters in their application of the project codebook to natural language data.

The project’s contributions to its disciplinary fields

We feel that the Faciloscope project contributes to the fields of rhetoric, professional writing, facilitation/informal learning, and the digital humanities in multiple ways. For the fields of rhetoric, professional writing, and facilitation, the Faciloscope demonstrates that higher-level rhetorical analysis can be automated. While these higher-level rhetorical analyses of facilitation moves may not approach the specificity and contextual awareness that traditional modes of close reading provide, they do offer instructive, global readings of texts that reveal patterns a human reader may miss while focusing on the passage at hand. In this way, we also feel that the Faciloscope project functions as a theoretical and methodological bridge between the digital humanities and the fields of rhetoric, professional writing, and facilitation, and makes available for inspection the type of “algorithmic criticism” or “macroanalysis” argued for by Stephen Ramsay (2011) and Matthew Jockers (2013), respectively.

At the same time, the Faciloscope addresses a concern that Ridolfo (2015) raises about the narrow framing of rhetoric in digital humanities scholarship. Ridolfo’s (2015) own approach is to theorize the digitization of Samaritan cultural heritage texts with the critical lens of audience and the canon of delivery. For the Faciloscope project, we understand the operations of online discussion forum posts, and facilitation efforts more specifically, as an assemblage of recognizable rhetorical moves (see Swales and Najjar, 1987; Grabill and Pigg, 2012) made by participants in order to advance arguments and build relationships. In other words, the Faciloscope, as a digital humanities project, takes tactics of persuasion as its core unit of analysis and production, and complexifies previous descriptions of rhetoric in the field of digital humanities, as Ridolfo (2015) points out in his analysis of the two fields. Citing Rockwell and MacTavish (2004), Ridolfo (2015) criticizes the definition of a rhetorical, multimedia artifact as:

. . . one designed to convince, delight, or instruct in the classical sense of rhetoric. It is not a work designed for administrative purposes or any collection of data in different media. Nor is it solely a technological artifact. This is to distinguish a multimedia work, which is a work of human expression, from those works that may combine media and reside on the computer, but are not designed by humans to communicate to humans.

While we feel that the output of the Faciloscope is intended to “instruct” and “delight” in the classical sense of rhetoric, we also feel that the app represents a means to collect and process data in order to aid the administration of online discussion environments. In this way, it extends to nonspecialists a rather sophisticated analytic capability: recognizing stable genre structures that arise from recurring social actions (Freedman & Medway, 1994; Miller, 1984; Schryer, 1993; Miller, 1994; Miller & Shepherd, 2004). In doing so, we have observed that the Faciloscope is not simply a curious rhetorical artifact, a producer of rhetorical artifacts, or an administrative tool; it extends the work of the digital humanities in general and of digital rhetoric in particular to informal learning practitioners in new and productive ways.

Methodology and timeline

The methodology employed for the creation of the Faciloscope is an interdisciplinary mix of humanities and machine learning techniques. The coding scheme or “codebook” guiding the human annotation of training and test data relies on qualitative coding methods often employed in rhetoric and professional writing research. The training and testing of the Faciloscope’s svm classifier rely upon benchmarking procedures used in supervised machine learning. Lastly, the development of the Faciloscope user interface is informed by the fields of user experience research and user-centered design: iterative prototyping with feedback from users guiding design choices.

The Faciloscope’s coding scheme of facilitation moves revolves around the following decision scheme or “codebook” (Boettger & Palmer, 2010):

General Instructions for Coders

  • Coders should code single sentences
  • Coders should code those sentences that they feel most confident about; otherwise, coders should skip the sentence
  • Coders should emphasize the content/rhetoric of a given sentence over the contextual associations that sentence may have with previous sentences in the corpus

Coding Scheme

The coding scheme classifies sentences according to 3 rhetorical moves: staging, inviting, and evoking.

Staging – a move that is aimed toward making a statement that introduces an idea, concept, or example in order to frame discussion or understanding.

  • WHAT and/or WHO topics (e.g., What happened? What is happening? What will happen? Who is responsible? Who is involved?)
  • Stipulatory/declarative
    • Makes use of the existential there (e.g. “There is a website that covers these topics.”)
  • Denotative sentences that use verbs of existence (is, are, was, were, etc.) should be considered staging
  • Describes conditions of action such as deadlines, agents of approval, procedures, materials, rationales as well as outcomes
    • Lists of points or examples
    • Descriptions could include evaluations of conditions (e.g., ease or difficulty; fast or slow)
    • Outcome statements will usually employ constructions such as “because,” “consequently,” “as a result”
    • Staging sentences can also take the subjunctive mood as a way to introduce a topic or condition (e.g., “I would like to get this done today” or “I would like to go to this meeting.”); these will often communicate the rationale for and/or goals of the action.
  • Evaluations of topics without explicitly referenced agents are staging
  • Quoted and/or citational statements will always be considered staging when they stand alone; if couched within a question, then the statement would be an inviting move

Ex. “Looking through this conversation, I believe everyone who's contributing is also white” (Confessional, ln 786).

Inviting – a move that explicitly guides the development of discussion or an idea.

  • Stipulatory or deliberative
  • Requests for participant action, which can include elaboration on a topic or an explanation of a process
    • Can be framed in subjunctive mood (e.g., “I would like to invite you”)
    • Imperatives should also be considered requests for action
    • Questions that seek to clarify an idea or gain information should be coded as Inviting
  • Solicitations of feedback that will inform decision-making
  • Redirects the focus of the conversation
  • Closings of discussions that call for future feedback
  • When a sentence explicitly uses words such as “invite,” “ask,” or “request,” the sentence should be classed as Inviting
  • Use of ellipsis (‘...’) at end of sentence is a sign of deliberation and should be marked as Inviting

Ex. 1 “How would you rephrase that question?” (Confessional, ln 449).

Ex. 2 “Amy, can you explain why you asked?” (Confessional, ln 966).

Evoking – a move that explicitly attempts to create connections among participants and/or maintain social relationships.

  • Demonstrations of respect/acknowledgement of individual perspectives
    • Invoking a specific, concrete other (i.e., names; Twitter handle)
    • Explicitly establishing connections between specific others (agents; not objects)
  • Bids for understanding, agreement, or mollification, sympathy, empathy
    • Routinely features “you” as a direct or indirect object of predicate actions (e.g., “I wanted to give you . . .” or “I understand that you are in charge of bidding”)
    • Expresses gratitude or apologies for actions
    • Clausal level: Agent (subject) + Affective Expression + Agents (direct objects)
  • Offers affective motivation for the completion of an action or the building of relationships, involving hortatory expressions (e.g., “Let’s do it!”), assurances (e.g., “I know you can do it.”), or accommodations (e.g., “We know you would like to be involved, so we are making every effort to include you in the process.”)
    • Affective motivation could also involve hedging or mollifying moves such as “We found your offer competitive and we are still interested in working with you”
    • The use of exclamation points can convert a syntactically staging sentence into affective motivation (e.g. “with so much change happening all around us, it's time for north carolina to stop limiting the freedom to marry!”)
    • Words or phrases in all caps are equivalent to exclamation points

Ex. “That’s a beautiful story, Ro.” “Thanks for sharing it” (Confessional, ln 66-67).

  • Salutations that might open a conversation (e.g., “Dear James, I hope you are doing well.”)
  • Negative framings of relationships

As you can see from the General Instructions for Coders section of the codebook, the units for annotation are single sentences, or words and phrases with a terminal punctuation mark. We chose the sentence with terminal punctuation as our unit for the Faciloscope because we sought to capture a small but semantically meaningful expression that could be easily tokenized by existing text processing packages such as the Natural Language Toolkit (Bird, 2006). This obviated the need to rely on less reliable chunking or parsing programs that would be needed to tokenize natural language texts into T-units (often a single independent clause, or a single independent clause and its dependent clauses; co-ordinate clauses may count as more than one T-unit) (Hunt, 1965; Young, 1995).
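To illustrate the unit-of-analysis step, the short sketch below uses NLTK’s off-the-shelf sent_tokenize function (the Punkt models); it is a minimal example and not the Faciloscope’s production code, and the sample thread reuses sentences from the codebook examples above.

    import nltk
    nltk.download('punkt')  # one-time download of the Punkt sentence models
    from nltk.tokenize import sent_tokenize

    thread = ("That's a beautiful story, Ro. Thanks for sharing it. "
              "How would you rephrase that question?")
    units = sent_tokenize(thread)
    # units == ["That's a beautiful story, Ro.",
    #           'Thanks for sharing it.',
    #           'How would you rephrase that question?']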

Two human raters applied the above codebook to legacy data from the Facilitation project and to “fresh” data scraped from online discussion forums. We then assessed inter-rater reliability of the human raters according to Cohen’s Kappa, achieving a Cohen’s Kappa of .95. Given that acceptable inter-rater reliability falls between .70 and .80 (Boettger & Palmer, 2010, p. 348), we proceeded with the above codebook in our efforts to compile training and testing data for the svm classifier.
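As a minimal sketch of how such a check can be computed, the following assumes scikit-learn’s cohen_kappa_score (available in recent releases of the library) and uses hypothetical rater labels:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical move labels from two raters over the same six sentences
    rater_a = ['stage', 'invite', 'evoke', 'stage', 'stage', 'invite']
    rater_b = ['stage', 'invite', 'evoke', 'stage', 'evoke', 'invite']

    kappa = cohen_kappa_score(rater_a, rater_b)
    print(round(kappa, 2))  # 0.75 for this toy data; the .70-.80 range cited above is the usual bar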

Once annotated by human raters, the data was then converted into a computational artifact amenable to machine learning. Each coded sentence was treated as an individual document and then processed to maximize salient features and remove insignificant features. We used the following pipeline of techniques (see also Figure 2 and the sketch that follows it):

  • Text converted to lowercase
  • Text tokenized according to words
  • Stopwords (function words, verbs of existence, pronouns; see note 1) removed from text
  • Tokens lemmatized to feature dictionary roots of words where applicable
  • Term frequency inverse document frequency (TF-IDF) weighting applied to remaining tokens
  • A sparse term frequency vector is constructed to hold the counts of the weighted terms in the corpus

Figure 2. Faciloscope Text Processing Pipeline
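A minimal sketch of this pipeline, assuming NLTK for tokenization and lemmatization and scikit-learn for TF-IDF weighting; the stopword list here is abbreviated (the full list appears in note 1), and the sketch is illustrative rather than the production implementation:

    import nltk
    from nltk.stem import WordNetLemmatizer
    from sklearn.feature_extraction.text import TfidfVectorizer

    nltk.download('punkt')    # word tokenizer models
    nltk.download('wordnet')  # lemmatizer dictionary

    lemmatizer = WordNetLemmatizer()
    stopwords = ['i', 'we', 'is', 'are', 'a', 'an', 'the', 'this', 'there']  # abbreviated

    def preprocess(sentence):
        tokens = nltk.word_tokenize(sentence.lower())       # lowercase, then word tokens
        tokens = [t for t in tokens if t not in stopwords]  # remove stopwords
        return [lemmatizer.lemmatize(t) for t in tokens]    # reduce to dictionary roots

    # TF-IDF weighting over the preprocessed sentences produces the sparse
    # term-frequency vectors that feed the classifier.
    vectorizer = TfidfVectorizer(tokenizer=preprocess, lowercase=False)
    X = vectorizer.fit_transform(["There is a website that covers these topics.",
                                  "How would you rephrase that question?"])
    print(X.shape)  # (number of sentences, number of weighted terms), stored sparsely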

Following convention, 80% of the annotated corpus was demarcated as the training set. The remaining 20% was reserved as our testing and verification set. Because the move count differed substantially among the 3 classes, the training and testing sets were unbalanced across classes, as follows:

Total Training Set - 15458 sentences

  • Stage Train - 9761 sentences
  • Evoke Train - 3673 sentences
  • Invite Train - 2024 sentences

Total Testing Set - 3866 sentences

  • Stage Test - 2441 sentences
  • Evoke Test - 919 sentences
  • Invite Test - 506 sentences

Because the Faciloscope classifies 3 categories (Staging, Inviting, and Evoking), we chose a One-versus-the-Rest (or multiclass) svm classifier from the scikit-learn machine learning library (Pedregosa et al., 2011).
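A minimal sketch of this setup, assuming scikit-learn’s LinearSVC (which trains one classifier per class against the rest for multiclass problems); the toy sentences and labels below stand in for the 15,458 coded training sentences:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    # Toy stand-ins for the human-coded corpus
    train_docs = ["There is a website that covers these topics.",  # stage
                  "How would you rephrase that question?",         # invite
                  "Thanks for sharing it!"]                        # evoke
    train_labels = ['stage', 'invite', 'evoke']

    vectorizer = TfidfVectorizer()
    X_train = vectorizer.fit_transform(train_docs)

    clf = LinearSVC().fit(X_train, train_labels)  # one-vs-rest under the hood

    X_new = vectorizer.transform(["Amy, can you explain why you asked?"])
    print(clf.predict(X_new))  # the predicted facilitation move for the new sentence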

We employed iterative and parallel development in the visual design and content model of the Faciloscope. Two team members independently created high-fidelity wireframes. The entire development team then reviewed and selected the best aspects of these wireframes in order to fashion an intuitive user interface for the submission of natural language texts and the reading of results. The wireframe authors then transitioned to full-fledged HTML and CSS mock-ups of the Faciloscope. The member of the team in charge of programming and app development then integrated these designs and HTML drafts as templates. We include the wireframe drafts as examples of the Faciloscope’s design and web authoring methodologies in Figures 3-7 below.

Figure 3. Faciloscope Wireframe Base

Figure 4. Faciloscope Wireframe Radar Charts

Figure 5. Faciloscope Wireframe Bar Chart

Figure 6. Faciloscope Wireframe Line Chart

Figure 7. Faciloscope Wireframe Base View 2

Project Timeline

Most of the development work for the Faciloscope occurred between late May and early September 2014. What follows is a brief timeline:

May 2014

We combined the categories listed on the Facilitation Cheat Sheet, examining features associated with each, to arrive at a simplified rubric consisting of the three moves we would use to train our classifier.

May-July 2014

We performed human coding of the facilitation session transcripts, sentence by sentence, to build a sufficiently large corpus to use for training and test purposes. We used a dual-coding method, calibrating and conducting periodic inter-rater reliability checks to ensure the fidelity of our coding scheme.

June-August 2014

We designed visualizations for the Faciloscope’s analytic output based on typical questions we had heard facilitators ask about a typical session. Each element of the display was designed to be assistive in two basic scenarios: 1) a real-time analysis of an in-progress session (to answer the questions “how are things going?” and “what should I do next?”); and 2) a training session for facilitators, to help answer questions such as “what does a good session look like?” and “how are the facilitation moves important in making a session successful?”

August 2014

We produced our first prototype of the system to begin soliciting feedback from the research team, including our museum partners. We held several conference calls where we conducted cognitive walkthroughs and gathered feedback, primarily to refine our data displays.

October 2014

We held another webinar to launch our private beta test period for the Faciloscope. We invited informal learning researchers as well as researchers we knew to be doing similar work in machine learning and computational rhetoric. We conducted a demo, received feedback, and announced opportunities for the group to access and use the app to help us further test it.

January 2015-Present

As the Facilitation Project came to an end, the app became a key “outcome” of the broader project and was featured in the final report and in several presentations made by Grabill and others on the project. But there was considerable interest in making the Faciloscope a public resource. To achieve this, we had to secure a more permanent hosting arrangement, study security issues to ensure that the app could stay online with a minimal maintenance footprint if necessary, and finally deploy the app in a stable, public environment. After another short test period starting in December 2015, we launched the current version of the Faciloscope in early 2016.

Comparison of the Project’s Expected and Current Outcomes

At the inception of the Faciloscope project, the team did not know if svm classification of 3 rhetorical moves on a sentence-by-sentence basis was possible. We were unsure if the classification algorithm would be able to detect the fine-grained nuances people were employing in online discussion forums, and unsure if the human coders could achieve suitable inter-rater reliability (80+%) and amass enough annotated training data to feed an svm classifier obliged to tag 3 classes.

In past experiences using supervised machine learning classifiers such as Naive Bayes and svm, team members had achieved encouraging results in terms of classifier precision, recall, and f1-scores (.78-.84). However, those results derived from the efforts of a single human rater and did not have the validation of inter-rater reliability metrics such as Cohen’s Kappa. We were hopeful that the increased rigor in human rater methodologies and the expanded size of the training set could offset the added complexity of classifying 3 classes.
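For reference, these metrics take only a few lines to compute with scikit-learn; the sketch below uses hypothetical gold labels and predictions rather than our actual test data:

    from sklearn.metrics import classification_report

    # Hypothetical gold labels and classifier predictions
    y_true = ['stage', 'invite', 'evoke', 'stage', 'invite', 'evoke']
    y_pred = ['stage', 'invite', 'stage', 'stage', 'invite', 'evoke']

    # Prints per-class precision, recall, and f1-score
    print(classification_report(y_true, y_pred))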

As we moved through the Faciloscope project, we found that the pair of human raters was able to process a large quantity of sentences and maintain high inter-rater reliability. After an initial test of 3,000 coded sentences, we achieved a Cohen’s Kappa of .95. Upon testing 13,000 coded sentences, the human raters achieved a Cohen’s Kappa of .90. Thus, the project met initial hallmarks of consistently coding a large array of training data, which, by extension, affirmed the viability of our coding categories. After training the svm classifier and testing against 20% of the original coded corpus, the Faciloscope classifier returned a Cohen’s Kappa of .76 in comparison to the human raters. As a conservative estimate of inter-rater agreement, we were satisfied with the Cohen’s Kappa of .76, knowing that additional modifications to the text processing methods and the classifier settings, together with an expanded training set, may raise this result into the relatively high .80 range.

In terms of the Faciloscope’s user interface, we met our initial goal of providing users with a global view of natural language text data while, at the same time, allowing users multiple means to interact with and view the data. The Faciloscope yields the distribution of facilitation moves found in an intuitive donut chart and a bar chart, replicating what the original research team would present to facilitators after hours of hand-coding. The tabular output of the Faciloscope provides users a granular and sequential view of the classification results, allowing users to re-read the natural language text in a linear fashion with the added benefit of the move tags. We are most pleased with the Bands visualization because it allows users to navigate the original natural language text guided by the presence of facilitation moves. For example, if a user notices a region of the text that is rich in Inviting moves, she can magnify that region and read the associated passage. At the time of writing this report, however, we have not had the opportunity to conduct user experience testing with our primary audience of informal learning facilitators and other online moderators, so these interface outcomes are provisional. It is unclear if the Faciloscope will enhance facilitator response time or analysis in live discussion threads.

Use of Best Practices and Standards

In terms of best practices for design, the Faciloscope project emphasizes User-Centered Design (UCD) and iterative approaches throughout the development life cycle. From its inception, the Faciloscope was designed to aid online learning facilitators and the original Facilitation project researchers in their efforts to code daily streams of data. Choosing to automate the coding of facilitation moves, thus, was a response to the needs of the original Facilitation project stakeholders. To ensure that we were addressing the concerns of these stakeholders, the Faciloscope development team held numerous interviews with key principals in order to see how our methods might improve facilitation and build new theory for scholarship. In fact, one of the original Facilitation project researchers was part of the development team. With her insider knowledge of both the original study’s design and the desires and habits of online facilitators, she guided many of the design efforts, ensuring that we were returning useful and usable results for our target audiences (for further discussion of the model of UCD used for this project, see Ridolfo, Hart-Davidson, & McLeod, 2011).

Intended Audience

We conceive of three audiences for the Faciloscope. The primary audience for the Faciloscope is the group of informal learning facilitators who were involved with the original Facilitation project research. These include representatives from the Science Museum of Minnesota and the North Carolina Museum of Life and Science. A related audience is the group of investigators named on the Institute for Museum and Library Services (IMLS) grant that funded both the Facilitation project and Faciloscope development. These investigators include principal Jeff Grabill (Michigan State University), Beck Tench (formerly North Carolina Museum of Life and Science, currently University of Washington), Troy Livingston (formerly North Carolina Museum of Life and Science, currently CEO, The Thinkery, Austin, TX), and Kirsten Ellenbogen (formerly Science Museum of Minnesota, currently President and CEO, Great Lakes Science Center, Cleveland, OH).

The second audience consists of other online forum moderators who wish to promote discussion and would like a global view of these discussions. One such group that we have had contact with is SpeakUp NC, a group invested in elevating the level of online commentary on hot-button news items (see http://speakupnc.org/). SpeakUp NC gathers tools and techniques to improve online commenting on news sites, and the Faciloscope offers a means for SpeakUp NC to monitor the trajectory of conversations and make proper interventions. For example, a preponderance of Staging moves by commenters and a lack of Inviting moves may signal that participants are not engaging with each other but performing “cross talk”--issuing statements that inhibit dialogue. A hypothetical sketch of such a heuristic follows.
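The sketch below is our own illustration rather than a feature of the app: it shows how a moderator might operationalize that cross-talk signal over Faciloscope-style move labels, with an arbitrary, assumed threshold of 70% Staging:

    from collections import Counter

    # Hypothetical move labels for one stretch of a discussion thread
    moves = ['stage', 'stage', 'stage', 'evoke', 'stage', 'stage']
    counts = Counter(moves)

    # Flag possible cross talk: mostly Staging, no Inviting at all
    if counts['stage'] / len(moves) > 0.70 and counts['invite'] == 0:
        print("Possible cross talk: consider posting an Inviting move.")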

The third audience comprises practitioners of digital humanities, rhetoric, professional writing, and user experience. We view the Faciloscope as a project that unites the theories, methodologies, and methods of the aforementioned disciplines in a coherent, useful, and durable way.

Process for Selection of Material

Because the Faciloscope extends from the larger WIDE Facilitation project, we inherited a framework for research and legacy data from the original Facilitation project team. The Facilitation project was interested in studying how knowledge is made and shared in online, informal learning environments. The original Facilitation researchers partnered with the Science Museum of Minnesota and the North Carolina Museum of Life and Science and gathered posts from online discussion forums centered around various topics of science, race, and culture. The Faciloscope team used these data to inform our 3-category coding scheme and also tagged these data as our initial training set for the svm classifier. When we exhausted the original data, we turned to the online discussion forum BackyardChickens.com (http://www.backyardchickens.com/f/), a site in which users share information about chicken-rearing. We selected BackyardChickens.com for three primary reasons. First, the structure of the BackyardChickens.com site allowed for easy webscraping. Second, BackyardChickens.com offered a large volume of data through sustained threads--some lasting years. Third, the genre of conversations found in the BackyardChickens.com forums is similar to the discussions investigated by the original Facilitation researchers, in that BackyardChickens.com drew experts and non-experts alike into the sharing of information while also engaging in community-building acts of thanking and well-wishing. We felt that the human coders would attain the same consistent coding decisions on the BackyardChickens.com data because of this similarity.

Current Project Status

As of this writing, the Faciloscope is functional and has been deployed on a live webserver. We would consider the web app to be in mid-level development. Additional documentation materials such as explanatory videos and a permanent URL need to be completed. Moreover, the functionality of some of the CSS and JavaScript elements needs to be reviewed through further user testing.

We also wish to add one more visualization to the Faciloscope output: a radar graph that would allow users to compare their inputted text output with an archived result (see Figure 8).

Figure 8. Proposed Radar Chart Visualization

The code for the radar visualization has been completed but has been deactivated for the current release of the Faciloscope. The issue here is twofold. First, we do not have archived data of facilitation results with which to establish a baseline for the radar graph. This baseline would need to be validated by online facilitators as typical of the genre, or as an example of “good” or “bad” discussion development. Only then would we have a baseline. Second, we do not have a means to guide appropriate comparison. For example, a user may not wish to compare the results from an Experimonth event on race with the results from a FeederSketch discussion on drawing birds. The topics might be considered too dissimilar to offer an instructive comparison.

In future versions of the Faciloscope, we would also like to expand its I/O options. The Faciloscope currently accepts only copied-and-pasted text and outputs only to HTML. We would like to offer users the ability to upload .txt, .doc, .csv, or .xlsx files. We also would like to render the HTML output as a pdf file for user download. The ability to generate reports of facilitation would increase the value of the Faciloscope because it could then serve as a record-keeping tool.

As part of a future upgrade, we would like to add streaming input to a Faciloscope mirror site in order to capture the evolution of conversations on social media platforms such as Twitter in real time. We envision users being able to submit a hashtag to the Twitter search API and track tweets as they are harvested and classified. The Bands visualization would be the central output. As the tweet data grows, users could see how facilitation moves accumulate.

We also plan to implement a second stage of facilitation move coding that will incorporate more specific valences. For example, under the current 3-class schema, an Evoking move would include both a negative statement such as “Why don’t you shut up!” and a more positive statement such as “Great job!” because of the shared presence of the exclamation point and because both evoke an affectual response. Adding valence to the coding scheme would allow the Faciloscope to differentiate between critical and affirming Evoking moves and allow users to get a better sense of the pathos of the discussion. Staging moves could be productively distinguished between direct and indirect descriptions. For example, distinguishing between a statement such as “According to the Science Museum of Minnesota, broody hens should be separated from each other” and a statement such as “I always put my broody hen in a separate coop” would enable users to track appeals to authority or external sources, and would perhaps indicate whether someone is making an argument based on expert opinion or on personal experience. The facilitator could then see how the conversation has developed through those different modes and intervene.

Lastly, we wish to extend the Faciloscope’s administrative interface. Because the Faciloscope does not store user data or allow for alterations to its input and output options, a robust administrative interface for users has not been developed. The application currently features the default administration interface that ships with the Django framework. However, in keeping with the original Facilitation project (see Grabill and Pigg, 2012; Sackey, Nguyen, & Grabill, 2015), we would like to offer facilitators the ability to stage informal learning events within the Faciloscope app as opposed to other online discussion platforms such as Facebook or Wordpress. The addition of an online discussion environment would obviate the need for users to copy and paste or download transcripts from ancillary sites for analysis. Instead, users would be able to hold informal learning events or discussions and receive integrated facilitation analysis. We consider this innovation the “enterprise” version of the Faciloscope, which will additionally require extending hosting and user registration.

Sustainability and Preservation

The development team is also committed to maintaining the Faciloscope through its life cycle. The Faciloscope is currently hosted on a webserver maintained by WIDE research at Michigan State University. Two system engineers are devoted to system maintenance. In addition, the designers of the Faciloscope have root access to the server and are able to perform updates to the app. The training data is stored on the WIDE server and on various cloud platforms, including Google Drive and GitHub.

Previous Peer Review

We have benefited from two rounds of informal peer review. The first round of peer review was conducted by the Faciloscope development team and researchers from the original Facilitation project. The subject of this first review was the codebook and whether or not the three condensed codes for the Faciloscope adequately approximated the nine facilitation codes established in the original research (see http://facilitation.matrix.msu.edu/files/5713/8548/5077/Facilitation_tool_for_facilitators_v2.pdf). We also were interested in what types of facilitation moves held the most value for fostering informal learning and/or online discussions. According to the reviewers from the Facilitation project research team, the new codes of Staging, Inviting, and Evoking were consistent with the nine codes previously established. Furthermore, reviewers identified Inviting-style moves as the weightiest in facilitated encounters.

The second round of peer review involved an online demonstration of the Faciloscope app via webinar. Invited guest reviewers included the original Facilitation project research team, representatives of the original museum facilitators, and experts in natural language processing and rhetoric from Carnegie Mellon University and the University of Texas at Austin. The feedback during this round of review was sobering but helpful. Trained facilitators wondered if the Faciloscope actually expedited their work or if incorporating another tool into their repertoire would slow their interventions. Concerns were also raised about the generalizability of the Faciloscope’s results given the training data. Because the Faciloscope relies on online discussion forum posts emphasizing the delivery of scientific or informal science information, reviewers wondered if its results could be extended usefully to other natural language texts or whether its results were reliable only for online, scientific discussions. Another concern was raised about the viability of the test set because, though randomized, the test sentences were culled from similar types of forums or from multiple forums on a single site. We are sensitive to the concerns raised during the second round of review, and the Faciloscope’s development process does include additional testing on other data sources.

Project Citation Guidelines

We recommend that users of the Faciloscope employ one of the following citations when publishing on the Faciloscope’s results:

Omizo, Ryan, Ian Clark, Minh-Tam Nguyen, & William Hart-Davidson. Faciloscope. East Lansing, MI: N.p., 2016. Web.

Omizo, Ryan, Ian Clark, Minh-Tam Nguyen, & William Hart-Davidson. (2016). Faciloscope. [Computer software]. East Lansing, MI. Retrieved from http://faciloscope.cal.msu.edu/facilitation/.

Intellectual property and copyright

We are applying a BSD 2 Clause license:

Copyright (c) 2016, Omizo, Ryan, Minh-Tam Nguyen, Ian Clark, & Bill Hart-Davidson

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

As the Faciloscope is an open web application, the results that people use for scholarship or facilitation require the following citation:

Omizo, Ryan, Ian Clark, Minh-Tam Nguyen, & William Hart-Davidson. Faciloscope. East Lansing, MI: N.p., 2016. Web.

Interoperability

We have tested the Faciloscope in Firefox, Chrome, and Safari. We can report that there are no issues with browser interoperability in terms of HTML, CSS, jQuery, or the Django model-view-controller infrastructure. One key interoperability issue with the current code base lies in the scikit-learn package that drives the svm classification algorithms. The Faciloscope relies on scikit-learn version 0.14.1; newer versions of the library will cause the application to fail. This error could be remedied with a revision of the Faciloscope wrapper functions; however, because scikit-learn version 0.14.1 is installed on the Faciloscope server, we see no immediate operational threat to the functioning of the app.
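One defensive measure, sketched here as a hypothetical startup guard rather than the app’s actual code, is to fail fast whenever the server’s installed library drifts from the required release (the matching pip pin would be scikit-learn==0.14.1):

    import sklearn

    # Hypothetical guard: abort before loading the classifier if the
    # installed scikit-learn does not match the release the wrappers expect.
    REQUIRED = '0.14.1'
    if sklearn.__version__ != REQUIRED:
        raise RuntimeError('Faciloscope requires scikit-learn %s; found %s'
                           % (REQUIRED, sklearn.__version__))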

References

Arg.dundee.ac.uk. (2016). OVA | ARG-tech. [online] Available at: http://www.arg.dundee.ac.uk/index.php/ova/ [Accessed 31 Mar. 2016].

Augmented Criticism Lab. (2014). Rhetorical Schematics Project. [online] Available at: http://acriticismlab.org/rhetorical-schematics/ [Accessed 31 Mar. 2016].

Bird, S., 2006, July. NLTK: the natural language toolkit. In Proceedings of the COLING/ACL on Interactive presentation sessions (pp. 69-72). Association for Computational Linguistics.

Boettger, R.K. and Palmer, L.A., 2010. Quantitative content analysis: Its use in technical communication. IEEE Transactions on Professional Communication, 53(4), pp. 346-357.

Collins, J., Kaufer, D., Vlachos, P., Butler, B. and Ishizaki, S., 2004. Detecting collaborations in text comparing the authors' rhetorical language choices in the Federalist Papers. Computers and the Humanities, 38(1), pp.15-36.

Cortes, C. and Vapnik, V., 1995. Support-vector networks. Machine learning, 20(3), pp.273-297.

DiMarco, C. and Harris, R.A., 2011, August. The RhetFig Project: Computational Rhetorics and Models of Persuasion. In Computational Models of Natural Argument.

Freedman, A. and Medway, P., 1994. Locating genre studies: Antecedents and prospects. Genre and the new rhetoric, pp.1-20.

Grabill, J.T. and Pigg, S., 2012. Messy rhetoric: Identity performance as rhetorical agency in online public forums. Rhetoric Society Quarterly, 42(2), pp.99-119.

Harris, R. and DiMarco, C., 2009. Constructing a rhetorical figuration ontology. In Persuasive Technology and Digital Behaviour Intervention Symposium (pp. 47-52).

Hunt, K.W., 1965. Grammatical Structures Written at Three Grade Levels. NCTE Research Report No. 3.

Ishizaki, S. and Kaufer, D., 2011. Computer-aided rhetorical analysis. Applied Natural Language Processing and Content Analysis: Identification, Investigation, and Resolution, pp.276-296.

Jockers, M.L., 2013. Macroanalysis: Digital methods and literary history. University of Illinois Press.

Kaufer, D. and Ishizaki, S., 2016. DocuScope - Department of English - Carnegie Mellon University. [online] Cmu.edu. Available at: https://www.cmu.edu/hss/english/research/docuscope.html [Accessed 31 Mar. 2016].

Kelly, A.R., Abbott, N.A., Harris, R.A., DiMarco, C. and Cheriton, D.R., 2010, September. Toward an ontology of rhetorical figures. In Proceedings of the 28th ACM International Conference on Design of Communication (pp. 123-130). ACM.

Miller, C. R., 1984. Genre as social action. Quarterly Journal of Speech, 70(2), pp. 151-167.

---, 1994. Rhetorical community: The cultural basis of genre. Genre and the new rhetoric, pp. 67-78.

Miller, C.R. and Shepherd, D., 2004. Blogging as social action: A genre analysis of the weblog. Into the blogosphere: Rhetoric, community, and culture of weblogs, 18(1), pp.1-24.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V. and Vanderplas, J., 2011. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12, pp.2825-2830.

Ramsay, S., 2011. Reading machines: Toward an algorithmic criticism. University of Illinois Press.

Reed, C. and Rowe, G., 2004. Araucaria: Software for argument analysis, diagramming and representation. International Journal on Artificial Intelligence Tools, 13(04), pp.961-979.

Ridolfo, J., 2015. Digital Samaritans: Rhetorical Delivery and Engagement in the Digital Humanities. Digital Rhetoric Collaborative.

Ridolfo, J., Hart-Davidson, W., & McLeod, M., 2012. Archive 2.0: Imagining the Michigan State University Israelite Samaritan Scroll Collection as the Foundation for a Thriving Social Network. Community Informatics, 7.

Rockwell, Geoffrey, and Andrew Mactavish., 2004. “Multimedia.” A Companion to Digital Humanities. Eds. Susan Schreibman, Ray Siemens, and John Unsworth (pp. 108-120). Oxford: Blackwell, 2004.

Rowe, G., Macagno, F., Reed, C. and Walton, D., 2006. Araucaria as a tool for diagramming arguments in teaching and studying philosophy. Teaching Philosophy, 29(2), pp.111-124.

Sackey, D. J., Nguyen, M., & Grabill, J., 2015. Constructing learning spaces: What we can learn from studies of informal learning online. Computers and Composition, 35, pp. 112-124.

Schryer, C. F., 1993. Records as genre. Written communication, 10(2), pp. 200-234.

Swales, J., & Najjar, H., 1987. The writing of research article introductions. Written communication, 4(2), pp. 175-191.

Young, R., 1995. Conversational styles in language proficiency interviews. Language Learning, 45(1), pp. 3-42.

Vep.cs.wisc.edu. (2016). Ubiqu+Ity. [online] Available at: http://vep.cs.wisc.edu/ubiq/ [Accessed 31 Mar. 2016].

 

  • 1. The stopword list used: stopwords = ['i', 'im', 'we', '...', 'also', 'mr', 'mrs', 'when', 'me', 'my', 'myself', 'our', 'ours', 'ourselves', 'Yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'doing', 'a', 'an', 'the', 'and', 'but', 'or', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'do', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', "'", '"', 'just', 'don', 'now', "they're", "'re", "you're", "we're", 're', 've', "'ve", "'s", 'em', 'dy', "'ve", '.', ',', 'th', 'us', 'Wasnt', 'isnt', ')', '(', '..']

    Some of the items are designed to catch extraneous characters that might be missed in processing; however, the key aspect of our stopword list is that we retain certain words that would be screened out in other procedures, most notably modal verbs such as “could” or “might” and the conditional “if.” We feel that modal verbs and if-conditionals are important lexical cues that denote a limited range of rhetorical possibilities. Because of that, they are clear signals of language work and, consequently, play a large role in meaning making.

Contribution

I was quite excited at the possibility of this digital analysis tool when I read this project title. I went to the Faciloscope App website (which isn’t working at the time of this writing, actually) and immediately put in data to try it out. It showed me two charts and a cut-off table and was hard to make sense of. I didn’t realize it at the time, but the tool isn’t actually made for my kind of data (I tried comment threads to news stories and Twitter posts), even though that wasn’t clear from the description on the site. Despite my enormous excitement about an autonomous way to code my data (anyone who has done qualitative data analysis knows how time intensive it is), the three rhetorical moves available didn’t help me understand my data very well.

When I got to the end of the project statement for the Faciloscope App, though, this inapplicability made more sense. Rhetorical moves are born out of the context of the data; in this case, online learning around science projects. Since that had nothing to do with my data sets, of course it didn’t transfer. This is the context for my mixed feelings about the App’s contribution to the digital humanities. The App emerges out of a framework for administrators of online learning spaces with multiple users, if I understand the project statement correctly (the jargon and organization of the piece made it difficult for me to follow at times). Narrow tools are certainly useful, valuable, and appropriate. Yet the parameters could be more clearly defined on the website itself, and the audience for both the tool and the project statement could be better articulated. So while the project sounds promising and its goals are lofty, I find its applicability a bit limited.

Part of the drawback, for me, lies in the description of the project. The project statement is primarily descriptive, with little reference to existing scholarly conversations and little persuasion for readers to become invested in online learning facilitation the way the project authors clearly are. A brief explanation of, or argument for, why this work is important and how it relates to other work in the field (not just other software) would enhance my experience of the project. If online learning practitioners are to use it, does that make it an educator tool rather than a researcher tool? How does it help an educator to see what percentage of a data set is a particular kind of rhetorical move? Some of my questions were answered toward the end of the statement, but that came a bit late (I had to work through my confusion on my own much earlier).

As part of this discussion, I suggest that the authors give a fuller definition of facilitation, since it is so integral to the project. Also, when describing the projects that were used to develop the coding categories, could the authors give examples or even show the data sets (images, links, etc.)? And how does all of this connect to the “comments section” of the title?

Overall, yes, I find this a worthwhile piece to publish and share with the DH community. Automated coding is interesting, useful, and a dream (if not yet realized for my own projects). I do recommend revising the project statement a bit before publishing to clarify some of these items, and I am happy to revise these comments accordingly.

Presentation

This is a digital tool, made to analyze digital data sets, so the online component is much more than a mode of delivery. There are some limitations with the design, such as the inability to download the results of the analysis and the cut-off table that I mention above, but the project statement says that the developers are still working on the App.

Preservation

The website, hosted on an MSU server, seems stable. There are also faculty involved who seem invested in the long term.

Contribution

Ryan Omizo, Minh-Tam Nguyen, Ian Clark, and William Hart-Davidson’s “You Can Read the Comments Section Again: The Faciloscope App and Automated Rhetorical Analysis” presents an online application designed to support the computational and rhetorical analysis of what facilitation looks like in online discussion groups. Faciloscope is a tool for conducting rhetorical analysis of comments in online forums, and it can help moderators understand when and how discussions take productive or unproductive turns. To accomplish this analysis, Omizo et al. have created an application that gives facilitators access to a higher-level view of the way discussion happens in online forums by coding and visualizing the rhetorical moves of “staging,” “inviting,” and “evoking” in online comments. As an application, Faciloscope operationalizes and aggregates the rhetorical work that trained readers do, but it also shows how rhetorical analysis may be built into digital applications. The authors write that their work has implications not only for rhetorical studies but also for “professional writing, facilitation/informal learning, and the digital humanities.”

What I find most impressive about this project is not that it has operationalized rhetorical principles per se, but that it has created computational rhetoric designed to assist a specific community practice of facilitating discourse in online discussion forums. Jim Brown and Annette Vee’s excellent January 2016 special issue of Computational Culture on rhetoric and computation stakes out the terms for the field and provocatively asks, “What might rhetoric and computation illuminate when we view them together?” I see Omizo et al. contributing one answer to that question by demonstrating how fieldwork and stakeholder communities may shape research and collaboration in computational rhetorics. Faciloscope emerges from collaboration with community museum partners and their desire to have an application to “assist them in reliably analyzing a facilitation session.” Faciloscope is a very early example of rhetoric software built and designed specifically around community stakeholder needs. In their written description of the project, Omizo, Nguyen, Clark, and Hart-Davidson show how this kind of engaged community work creates a research path for rhetorical studies and digital humanities, one with considerable potential as a model for other projects.

The project joins a burgeoning suite of rhetorical experiments by Omizo and Hart-Davidson and the WIDE@MATRIX Computational Rhetorics Group (C|R|G), who recently published an “app-ticle” (an article and an application) called the Hedge-O-Matic in Enculturation’s May 2016 issue. The Hedge-O-Matic does related support vector machine-based rhetorical work by tokenizing “raw text at the sentence level” and classifying sentences “as either a hedge or non-hedge.” Omizo, Nguyen, Clark, and Hart-Davidson note that the application then outputs what Swales and Najjar (1987) term “rhetorical moves,” and Faciloscope is an important case example for rhetoricians of how rhetorical concepts may be operationalized through software for practical and theoretical purposes.
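By way of illustration, the sentence-level classification pipeline these projects describe can be sketched in a few lines of scikit-learn. The training sentences, labels, and feature choices below are invented for the example; they are not the Faciloscope’s actual model, features, or data:

    # Hypothetical sketch of a sentence-level SVM "move" classifier.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Invented toy examples of the three facilitation moves.
    sentences = [
        "Welcome to week two of Experimonth!",            # staging
        "What did the rest of you observe this month?",   # inviting
        "That reminds me of last year's data on sleep.",  # evoking
    ]
    labels = ["staging", "inviting", "evoking"]

    # Unigram/bigram tf-idf features feed a linear support vector machine.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(sentences, labels)

    print(model.predict(["How would you explain your results?"]))

In the real application, of course, the classifier would be trained on the human-coded discussion threads the project statement describes rather than on toy sentences like these.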

Presentation

The site uses a simple and effective layout that requires JavaScript for the Google reCAPTCHA challenge. Beyond the reCAPTCHA, Faciloscope loads in Lynx and thus should work with screen reader software. All output data is returned as plain text and is paired with highly effective companion visualizations of “staging,” “inviting,” and “evoking.” These rhetorical moves are also represented in the textual data, along with an overview of particular tagged sentences. I immediately see how and why the tool would be useful for tailored community facilitation, and I also plan to use it in my own classroom, not only as an example of how some elements of rhetorical analysis may be identified and operationalized but also to talk about the exigency for the project. That said, it’s worth noting that Faciloscope is not a magic cure for online facilitation, nor is it intended to be. It’s a rhetorical tool, one that has the potential to help moderators manage conversations about facilitation. While that may sound meta to readers outside rhetorical studies, consider that one may not have prior knowledge of what effective facilitation looks like for a given online community. Faciloscope provides a kind of output that’s ideal for community and pedagogical discussion.

Preservation

The work has the institutional support of the Michigan State University College of Arts and Letters and the Writing in Digital Environments Research Center @ MATRIX: Center for Digital Humanities and Social Sciences. WIDE@MATRIX has its own sustainability plans to back up its DH projects, and in addition to these institutional resources, the authors have multiple version-controlled backups via GitHub and Google Drive. Their plan to maintain the functional project for its respective “life cycle” is realistic. Meanwhile, long-term (100+ years) offline storage of all their research papers and code could be accomplished through analog media. From Twitter I know that the Michigan State University College of Arts & Letters recently hired a DH/libraries specialist, so I expect those kinds of long-term conversations about preservation are happening.