Week 14: Generating Holy Scripture

I’ve been thinking about meaningful datasets and what makes them so. I’ve also been thinking about this in the context of what might be more-or-less available to source. Sacred religious texts might meet both criteria. Faith is deeply personal to people and religious stories have been told since the beginning, yes?

According to these Pew Research reports from 2012 and 2017, most people in the world belong to a religious group. In order of size, the largest groups are Christians, Muslims, Hindus, and Buddhists. I tasked myself with finding significant scriptures for each of these religions.

In some cases this meant learning what those are in the first place and quickly realizing that it’s not necessarily an easy answer. Language, stories, and texts evolve and develop differently over time and geographies in their expressions and interpretations. Which scriptures are read and the particular versions varies by denomination.

For training a machine learning model, I looked for documents translated into English. Any translation raises questions of accuracy and meanings that are lost or gained. Then again, these stories have been a part of the written and oral traditions for so long; are they not already the result of thousands of years of human telephone?

In addition, I sought to find documents as digital text (not scanned books), “complete” texts (as opposed to selections), and those without commentary and analysis (at least for now).

So yeah, considering all of these points, it got complicated real quick. And once I knew what I was looking for, it wasn’t necessarily easy to find. I have more questions now than when I started this attempt. This project is much larger in scope than the short time I currently have allows. Let’s just say, in ITP spirit, that this is an earnest prototype.

Problematic as it may be for a number of reasons, not least because I’m sure it’s grossly incomplete, here’s a list of what I managed to find and where I found it. I welcome any and all comments and suggestions.

The King James Bible from Project Gutenberg

The Quran translated by Mohammed Marmaduke Pickthall from Islam101


The Tipitaka or The Pāli Canon texts of the Theravada tradition (a reference chart), all below from ReadingFaithfully.org.

Here’s a comparison of the included texts: Christian 25%, Islamic 5%, Hindu 19%, and Buddhist 51%.

I collected eleven documents total. Those that I sourced as ePubs I converted to PDFs using this online tool. Then I used Adobe Acrobat to convert all PDFs into Rich Text Format (RTF) files. Next, I used TextEdit to convert those to plain text files (Format > Make Plain Text), although I could have used textutil for this (a comparison later on showed no difference in the output). In some cases, such as for the Bible, the Qur’an, and the Upanishads, I used TextWrangler to remove the artificial line breaks in the middle of sentences (Text > Remove Line Breaks). I’m not sure what compelled me to make these decisions—perhaps muscle memory from my previous charRNN tests?

It was useful to deal with each file individually at first to remove document details about where it came from (e.g. all the Project Gutenberg info) and the translators’ introductions and such. But maybe I should leave in this info? Thinking about Caroline Sinders’ Feminist Data Set work here.
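The line-break cleanup could also be scripted. Here’s a rough sketch of the heuristic (hypothetical code, not what I ran; TextWrangler’s menu command did the actual work): join a line to the previous one whenever the previous line doesn’t end in sentence-final punctuation.

```javascript
// Heuristic cleanup of artificial mid-sentence line breaks: merge a line
// into the previous one unless the previous line ends with sentence-final
// punctuation or either line is blank. Verse numbering and headings would
// need special-casing in a real pass.
function removeArtificialBreaks(text) {
  const out = [];
  for (const raw of text.split('\n')) {
    const line = raw.trim();
    const prev = out[out.length - 1];
    if (prev && line && !/[.!?:;"”')\]]$/.test(prev)) {
      out[out.length - 1] = prev + ' ' + line; // join the wrapped line
    } else {
      out.push(line);
    }
  }
  return out.join('\n');
}
```

Blank lines are kept as paragraph separators, so only hard wraps inside a sentence get merged.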

The documents, when compared to one another, show variation in line spacing: some are single-spaced, others double-spaced, while others contain a mix. In the end, I decided to leave it—this will likely impact the look of the output results.

In addition, during the file format conversion many diacritics did not convert well. And so continues the story of translation and interpretation…

Following my notes from before, I used textutil to concatenate all files into one document titled input.txt: textutil -cat txt *.txt -output input.txt

In the end, my dataset totaled ~18MB.

As before when working with text, I decided to use the ml5js version of a Multi-layer Recurrent Neural Network (LSTM, RNN) in order to generate text at the character level. Many of my classmates have argued that this has been ineffective for them, but I was pleased with the results from my previous experiments so I’ll stick with it for now.

I also used Spell.run again because they provide access to clusters of GPUs for faster training than Paperspace. Nabil Hassein’s tutorial is an excellent resource for using Spell and training an LSTM model in the ml5js world. Here is a quick summary of my steps:

  1. On my local computer, mkdir scripture

  2. cd scripture

  3. virtualenv env

  4. source env/bin/activate

  5. git clone https://github.com/ml5js/training-lstm.git

  6. cd training-lstm/

  7. mkdir data

  8. mv ../input.txt data/ (move input.txt into the new data directory)

  9. adjust the hyperparameters via nano run.sh (which lives inside training-lstm). Using this as a reference, I applied these settings for my 18MB file:
    --rnn_size 1024 \
    --num_layers 3 \
    --seq_length 256 \
    --batch_size 128 \
    --num_epochs 50 \

  10. pip install spell

  11. spell login

  12. enter username & password

  13. spell upload data/input.txt

  14. provide a directory name to store my data input file on spell, I wrote: uploads/scripture

  15. request a machine type and initiate training! spell run -t V100x4 -m uploads/scripture:data "python train.py --data_dir=./data"

  16. fetch the model into the scripture dir (cd .. out of training-lstm): spell cp runs/11/models (11 is my run #)


Some details from the training run:

  1. I selected the machine type V100x4 at $12.24/hour

  2. Start time: 04:24:51am

  3. Finish time: 07:16:09am

  4. Total cost: $34.88
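The bill roughly checks out as elapsed time times the hourly rate (a quick sanity check; I assume billing granularity on Spell’s side explains the few cents of difference):

```javascript
// Sanity check on the bill: 04:24:51 to 07:16:09 is 2h 51m 18s.
const startSec = 4 * 3600 + 24 * 60 + 51;
const finishSec = 7 * 3600 + 16 * 60 + 9;
const hours = (finishSec - startSec) / 3600; // 2.855 hours
const cost = hours * 12.24; // ≈ $34.94, within pennies of the $34.88 billed
```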

Coming soon!

Week 14: Chat Woids

Welcome to Chat Woids, a chat room that embodies the flow of conversation quite literally.

For this final assignment, I created an opportunity to consider Computational Typography in the context of multi-user interaction. Building off of my recent flocking sketch with the ha has, I incorporated a socket server to receive input from any number of connected clients. Text messages spawn at the center of the screen and then move around according to algorithmic rules modeled on animal behavior (as first laid out by Craig Reynolds here and later programmed by Daniel Shiffman as seen here).

Visualizing words as flocking/herding/schooling together matches my mental model of what happens when spoken words leave our lips: they stay with us and with each other in some capacity to inform the context of the present moment and our word choices. Over time, perhaps their initial weights diminish and they linger on the periphery.

I spent some time learning the computation of flocking behavior. The material agents, whether they be the original boids or my current woids, follow three rules: cohesion (find your neighbors within a given radius), separation (but don’t run into them), and alignment (move in the same direction as them). Playing with the global weights of each of these impacted how the woids in my chat room behaved. Too much separation and the woids avoided each other at all costs. Too much cohesion and they clustered together into a tight wad that greatly reduced legibility. In the end I adjusted the parameters to ensure balanced flocking tendencies. Isn’t it nice to picture a flowy trail of words emanating from a friendly chat?
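To make the three rules concrete, here’s a stripped-down sketch of the steering computation with plain {x, y} vectors (the weights and the inverse-square separation falloff are illustrative choices, not exactly Shiffman’s implementation or my final values):

```javascript
// Tiny vector helpers.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y });
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y });
const scale = (v, s) => ({ x: v.x * s, y: v.y * s });
const mag = (v) => Math.hypot(v.x, v.y);

// Cohesion: steer toward the neighbors' average position.
// Alignment: match the neighbors' average velocity.
// Separation: steer away from each neighbor, more strongly when closer.
function flockingForce(woid, neighbors, weights) {
  let cohesion = { x: 0, y: 0 };
  let alignment = { x: 0, y: 0 };
  let separation = { x: 0, y: 0 };
  for (const other of neighbors) {
    cohesion = add(cohesion, other.pos);
    alignment = add(alignment, other.vel);
    const away = sub(woid.pos, other.pos);
    const d = mag(away) || 1;
    separation = add(separation, scale(away, 1 / (d * d))); // inverse-square
  }
  const n = neighbors.length || 1;
  cohesion = sub(scale(cohesion, 1 / n), woid.pos); // toward center of mass
  alignment = scale(alignment, 1 / n);
  return add(
    add(scale(cohesion, weights.cohesion), scale(alignment, weights.alignment)),
    scale(separation, weights.separation)
  );
}
```

Cranking weights.separation up relative to the others reproduces the avoid-at-all-costs behavior, while a dominant weights.cohesion produces the tight, illegible wad.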

The chat room was well-received during some brief playtesting on the floor. My guests enjoyed the disruption to the normal text/chat space, especially seeing prior words swing back into view. A useful suggestion was to incorporate some visual modification to the text to indicate when words entered into the stream. Otherwise it became increasingly (or even more) challenging to read and respond. I decided to reduce the font size of the woids over time to give priority to newer additions to the conversation and also to mirror my image of how they distance themselves yet continue to hover around conversation-goers.

My understanding is that this flocking code only accounts for acceleration and force (they equal each other here) but not the mass of the woids. I’d play with this in future iterations (and learn physics again!), perhaps assigning a mass to each agent according to the character count of the submitted strings. In addition, I’d play with assigning chat room users different weights for separation, alignment, cohesion, as well as their own parameters for acceleration and velocity. I’m curious how it would impact the conversation dynamics if some texts stayed aloof or others lagged behind.
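A minimal sketch of that extension, assuming Newton’s second law (a = F/m) and the hypothetical mass-from-character-count rule:

```javascript
// acceleration = force / mass. With mass fixed at 1 (as in the current
// sketch), force and acceleration coincide; here a woid's mass is its
// character count, so longer messages respond more sluggishly.
function applyForce(woid, force) {
  const mass = woid.text.length || 1; // hypothetical: mass = character count
  woid.acc.x += force.x / mass;
  woid.acc.y += force.y / mass;
}
```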

More than anything, I love this idea that our words stay nearby but take on a life of their own after they leave us.

While the room is still deployed, chat here!


Week 13: More Moving Type

Immediately after class I spent some time with the referenced works. A few stayed with me over the week:

  • The flying letters and words from Jörg Piringer’s abcdefghijklmnopqrstuvwxyz and the opening titles of Barbarella

  • The clever way Ana Maria Uribe uses the shapes of letters as material for animated pictorial compositions/concrete poetry in her Anipoemas

I created these sketches in response:


The word pop is an onomatopoeia. It phonetically reminds us of the actual sound it describes. But not necessarily in how it is read. This sketch is an attempt to address and consider the idea that how we read something is also tied to its meaning. How successful is it for you? I’m curious about our instinct to connect and create relationships between anything in front of us—whether that is letters, words, a sequence of photographs, people, etc. The GIF here reduces the animation rate a bit, which I think works well for this size. For a piece like this it’s worth paying attention to letter density and overall canvas size. Code


Playing with another onomatopoeia, ha ha. Shiffman’s flocking sketch was a quick way to test a mass of moving letters and/or words on the screen, and in this case, as part of simulated flocking behavior. The only user input is dragging the mouse to create more ha’s. This particular form of interaction does not relate to nor enhance the meaning of the piece. However, I’d argue that laughter is contagious and comes in waves within a crowd and this reminds me of that. Code


Intrigued by how Oren converted glyph paths into polygons last week, I played around with his code and modified it such that the vertices of each letter disappear over time. The GIF above is a high speed version. Ideally this happens so slowly that you barely notice. At what point do you realize that it’s different? In general I’m curious about perception of incremental change over time. Code
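A sketch of the erosion step, assuming each letter has already been flattened to an array of {x, y} vertices (as in the path-to-polygon approach); the interval and minimum are made-up parameters:

```javascript
// Every `interval` frames, remove one randomly chosen vertex from the
// glyph's polygon, stopping once only `minPoints` remain. Run slowly
// enough, the change is barely perceptible.
function erodeGlyph(points, frameCount, interval, minPoints = 3) {
  if (frameCount % interval === 0 && points.length > minPoints) {
    const i = Math.floor(Math.random() * points.length);
    points.splice(i, 1); // drop one vertex in place
  }
  return points;
}
```

Called once per draw() with p5’s frameCount, a large interval makes the decay nearly subliminal; the GIF’s high-speed version is just a small interval.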

Final Project Idea: To elaborate on movable type to create concrete poems from letterforms and/or words. I have this vague idea of creating objects from individual letters, words, or the geometric forms of letter shapes to push around the page and break free of traditional left-to-right display along horizontal lines. But oh geez, how will this be different from a refrigerator of word magnets? Ideally, the method of interaction connects to the meaning of the work and does not remind people of refrigerators nor magnets.