
How should we review tools?

Some things to consider:

Quinn Dombrowski

Tool review should be taken up as an issue separate from directory creation and maintenance. Tool reviews should be a genre of writing published by journals or in similar venues (e.g. a tool blog run by an infrastructure-oriented organization like DARIAH). This has already happened, to some extent: for instance, the Journal of Interactive Technology and Pedagogy (dormant since 2018) had a “tool tips” submission type. Tool directories could aggregate or link to these reviews, but managing the workflow for soliciting, reviewing, and publishing tool reviews is a different kind of labor from running a tool directory, and is better served by a venue that already does similar work.

That said, there is often a kind of evaluation or review that goes into the descriptions of tools and/or the structured metadata about them in a directory, so directories that provide anything more than a list of links can’t get away from this entirely. Probably the best thing they can do is to have clear guidelines for how they assign metadata or write descriptions, and, in those descriptions, be clear about some basic parameters of the context (e.g. classroom use vs. research projects).

Geoffrey Rockwell

There is a history of tool reviews in the digital humanities. In the first issue of Computers and the Humanities (CHum), David Lieberman (1966) reviewed the Manual for the Printing of Literary Texts and Concordances by Computer, which documented the PRORA system developed by Glickman and Staalman at the University of Toronto (1966). In the same issue, the “Prospect” mentioned plans to “publish a list of computer programs to solve humanistic problems” (n. a. 1966, 2). The “Prospect” also called for people to send in information and laid out what the editors felt was important. Their list gives us a good sense of what it was essential to know about a tool in those mainframe days:

1. Name of program (if any)
2. Purpose of program (briefly and in non-technical language)
3. Type and format of input and any volume limitations
4. Type and format of output and any volume limitations
5. Programming language used
6. Required hardware
7. Running time
8. Availability of documentation
9. Maintenance plans
10. Name and address of person to correspond with
11. Any additional information available (e.g. publications describing program) (n. a. 1966, 2)

Some things stand out in this list. First is the focus on input and output; this was before the days of interactive computing. Also important are the programming language and the hardware required to get the tool to work at all. Tools in the 1960s didn’t come as executables to install or as services available through a web site, but as code that you would have to compile and run on your own machine (if it was suitably configured). By contrast, today we typically worry more about what a tool does and how it fits into our interpretative practice.

All this is to say that the way we read and review tools has changed over time, as the way computing practices fit into our research has changed. If we think of tools as human artifacts with histories of production and consumption, just like texts, then we need to be sensitive to the shifts in how these artifacts have been used, read, and reviewed (Rockwell 2018). In a paper led by John Simpson titled “The Rise and Fall of Tool-Related Topics in CHum” (2016), we looked at changes in tool discourse over time in what used to be one of the journals of record of the field.

In the TAPoR (tapor.ca) project we have tried to take this historical horizon seriously and to document both historical tools and the history of tools. For example, we have an entry for PRORA even though there is little chance of retrieving the code or running it. You will also see that we have added comments with information about publications on PRORA, including the review by Lieberman. Reviews, lists of tools, and tool directories are not outside the history of the digital humanities but woven through it.
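
To make concrete what a structured directory entry might look like, here is a minimal sketch in Python. It is hypothetical: the field names are invented for illustration and do not reflect TAPoR’s actual data model. It simply maps the 1966 checklist onto a record, filled in only with what the sources cited below tell us about PRORA.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ToolEntry:
        """One directory record; fields mirror the 1966 CHum checklist."""
        name: Optional[str] = None   # 1. Name of program (if any)
        purpose: str = ""            # 2. Purpose, briefly and non-technically
        input_spec: str = ""         # 3. Type/format of input, volume limits
        output_spec: str = ""        # 4. Type/format of output, volume limits
        language: str = ""           # 5. Programming language used
        hardware: str = ""           # 6. Required hardware
        running_time: str = ""       # 7. Running time
        documentation: str = ""      # 8. Availability of documentation
        maintenance: str = ""        # 9. Maintenance plans
        contact: str = ""            # 10. Person to correspond with
        publications: List[str] = field(default_factory=list)  # 11. Additional information

    # A sparse entry for PRORA, recording only what the cited sources attest.
    prora = ToolEntry(
        name="PRORA",
        purpose="Printing of literary texts and concordances by computer",
        documentation="Glickman and Staalman (1966), University of Toronto Press",
        publications=["Lieberman (1966), review in Computers and the Humanities 1:1"],
    )

Such a record makes explicit which of the 1966 questions a directory can still answer for a historical tool, and which are now unanswerable.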

References

n. a. (1966). “Prospect.” Computers and the Humanities 1:1. pp. 1-2.

Glickman, R. J. and G. J. Staalman (1966). Manual for the Printing of Literary Texts and Concordances by Computer. Toronto: University of Toronto Press.

Lieberman, D. (1966). “Review of Manual for the Printing of Literary Texts by Computer by Robert Jay Glickman and Gerrit Joseph Staalman.” Computers and the Humanities 1:1. p. 12.

Rockwell, G. (2018). “Reading Text Analysis Tools.” In The Pleasure of English Language and Literature: A Festschrift for Akiyuki Jimura. Eds. Hideshi Ohno, Kazuho Mizuno, and Osamu Imahayashi. Hiroshima: Keisuisha.

Simpson, J., G. Rockwell, A. Dyrbye, and R. Chartier (2016). “The Rise and Fall of Tool-Related Topics in CHum.” Digital Studies / Le champ numérique. http://doi.org/10.16995/dscn.29

Lisa Spiro

How to review tools depends on the audience and the larger goals of the publication or website. To allow readers to compare software and get a quick sense of a tool, reviews could employ common criteria, such as the name of the tool, where to find it, what it does, and how well it performs these tasks. Researchers might want to know about cost, ease of use, flexibility, support, and the ability to get data in and out, while instructors might focus more on how suitable the tool is for classroom use. Other developers might be interested in how the tool was made and whether the code is open source, while tenure committees might care about the tool’s impact and quality. It would also be useful to contextualize the tool: what are some use cases, and how well does it meet them? As Geoffrey points out, what to include in a tool review partly depends on the current computing environment.
