Hearing Program Behavior with TSAL
Joel C. Adams, PhD
Department of Computer Science
Calvin University
Grand Rapids, MI USA 49546
adams@calvin.edu
Mark C. Wissink
Department of Computer Science
Calvin University
Grand Rapids, MI USA 49546
mcw33@students.calvin.edu
Abstract—Much work has been done in the area of real-time
algorithm visualization, in which a program produces a graphical
representation of its behavior as it executes. In this lightning talk,
we examine the relatively uncharted territory of real-time
audialization, in which a program produces a sonic representation
of its behavior as it executes. Such work seems likely to benefit
auditory learners, especially those with visual disabilities. To
support this exploration, we have created the Thread Safe Audio
Library (TSAL), a platform-independent, object-oriented C++
library that provides thread-safe classes for mixing, synthesizing,
and playing sounds. Using TSAL, we can create an audialization
by taking a working program and adding library calls that
generate behavior-representing sounds in real time. If a program
is multithreaded, each thread can play distinct sounds, allowing us
to hear the multithreaded behavior. This lightning talk provides
an overview of TSAL and demonstrates several audializations,
including the Producers-Consumers Problem, Parallel MergeSort,
and others.
Keywords—audialization, audio, concurrent, hearing,
multicore, multithreading, parallel, sound, synchronization, threads
I. INTRODUCTION
Much has been done to create visualizations for common
sequential algorithms (e.g., [7] provides a literature survey), and
some have worked on concurrent/parallel visualizations, such as
[1, 2, 4]. Visualizations provide a means of seeing program
behavior, and since many students are visual learners,
visualization makes sense as a starting point for sensory
depictions of program behavior.
However, not everyone is a visual learner. Some students are
auditory learners who learn best by hearing; others are tactile
learners who learn best by touch and manipulation, and so on. In
particular, visualizations offer limited benefits for students with
visual disabilities, which suggests we explore other sensory
means of representing a program’s behavior. In this work, we
focus on the sense of hearing and the use of sound.
When a typical laptop runs a compute-intensive
multithreaded program, the extra heat generated by the active
cores causes the laptop’s fan to start. The resulting white noise
provides a crude sonic indicator that the program is doing
something atypical. This is an unintentional sonic side effect of
the hardware engineering.
We instead propose to intentionally add sound-generating
calls to a program in a way that lets us hear its behavior. We
describe the resulting sonic effect as an audialization—a sonic
representation of the program’s behavior—similar to a
visualization, but for hearing instead of seeing.
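As a minimal sketch of what such added calls might look like, consider a bubble sort that emits a tone after every swap, with the pitch tracking the value just moved. Since this excerpt does not show TSAL's actual API, the `playTone` helper below is a hypothetical stand-in that writes samples into a buffer rather than invoking any real audio library:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical helper (not TSAL's API): instead of playing audio, it
// appends a short sine tone to a sample buffer, keeping the sketch
// self-contained and silent.
void playTone(std::vector<float>& buffer, double freqHz,
              double durationSec = 0.05, double sampleRate = 44100.0) {
    const double kTwoPi = 6.283185307179586;
    std::size_t n = static_cast<std::size_t>(durationSec * sampleRate);
    for (std::size_t i = 0; i < n; ++i)
        buffer.push_back(static_cast<float>(
            std::sin(kTwoPi * freqHz * static_cast<double>(i) / sampleRate)));
}

// Bubble sort with one sound call added after each swap; the pitch
// tracks the value just moved, so large elements are heard rising.
int audializedBubbleSort(std::vector<int>& a, std::vector<float>& audio) {
    int swaps = 0;
    for (std::size_t i = 0; i + 1 < a.size(); ++i)
        for (std::size_t j = 0; j + 1 < a.size() - i; ++j)
            if (a[j] > a[j + 1]) {
                std::swap(a[j], a[j + 1]);
                ++swaps;
                playTone(audio, 200.0 + 100.0 * a[j + 1]); // value -> pitch
            }
    return swaps;
}
```

In a real audialization, the tone would be routed to a mixer and played in real time; the point of the sketch is only that a single call per interesting event suffices to make the algorithm's behavior audible.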
We have been unable to find any published research related
to hearing program behavior. Some user-interface researchers
have published work on the use of earcons, the sonic equivalent
of icons: interface widgets that emit distinctive sounds as one
moves the mouse over them, designed for people with visual
impairments (e.g., [5]). CS education researchers have
also published work on using sound-file processing to motivate
novice programmers (e.g., [6]), but we have been unable to find
any published work on intentionally incorporating sound into a
program in order to audibly represent the program’s behavior.
However, there is a precedent for such work. In their
biography of Claude Shannon titled A Mind at Play, Jimmy Soni
and Rob Goodman relate the following story from when
Shannon visited Alan Turing in London in 1950:
“Even decades later, Shannon would recall one of Turing’s
inventions:
So I asked him what he was doing. And he said he was trying
to find a way to get better feedback from a computer so he
would know what was going on inside the computer. And he’d
invented this wonderful command. See in those days, they
were working with individual commands. And the idea was to
discover good commands.
And I said, what is the command? And he said, the
command is to put a pulse to the hooter, put a pulse to the
hooter. Now let me translate that. A hooter … in England is a
loudspeaker…
Now what good is this crazy command? Well the good of
this command is that if you’re in a loop, you can have this
command in that loop and every time it goes around the loop
it will put a pulse in and you will hear a frequency equal to
how long it takes to go around that loop. And then you can put
another one in some bigger loop and so on. And so you’ll hear
all of this coming on and you’ll hear this ‘boo boo boo boo
boo boo boo’ and his concept was that you’d soon learn to
listen to that and know when it got hung up in a loop or
something else or what it was doing all the time, which he’d
never been able to tell before.” [8]
Thus, years before the development of compilers, debuggers,
graphical displays, or other modern programming tools,
Turing had the idea of creating a sound-pulse machine
instruction he could use to hear a program executing. The result
would be a new sonic language that he could use to profile
correct programs and debug incorrect ones.
This idea seems to have been lost in the nearly 70 years since
Turing had it. We propose to revive this practice.
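Turing's scheme can be sketched in code. The names below (`Hooter`, `pulse`) are our own illustrative inventions, and wall-clock time is replaced by an assumed fixed cost per loop iteration so the example stays deterministic; the key property is that the pitch the listener hears equals the loop's iteration rate:

```cpp
#include <cstddef>
#include <vector>

// A sketch of Turing's "pulse to the hooter" idea, with simulated time:
// record when each pulse would reach the loudspeaker.
struct Hooter {
    std::vector<double> pulseTimes;               // seconds (simulated)
    void pulse(double now) { pulseTimes.push_back(now); }
    // The pitch a listener hears: pulses per second of elapsed time.
    double frequencyHz() const {
        if (pulseTimes.size() < 2) return 0.0;
        double span = pulseTimes.back() - pulseTimes.front();
        return static_cast<double>(pulseTimes.size() - 1) / span;
    }
};

// Instrument a loop with one pulse per iteration. With an assumed
// 2 ms body, the hooter emits a 500 Hz pulse train, so a listener can
// estimate the loop's speed, and notice when it hangs, by ear alone.
double simulateInstrumentedLoop(int iterations, double bodyCostSec) {
    Hooter hooter;
    double now = 0.0;
    for (int i = 0; i < iterations; ++i) {
        now += bodyCostSec;   // stand-in for the loop's real work
        hooter.pulse(now);    // Turing's extra instruction
    }
    return hooter.frequencyHz();
}
```

Nesting a second pulse call in an outer loop would layer a lower frequency on top of this one, which is exactly the "boo boo boo" texture Shannon describes in the quotation above.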
This work was funded by NSF DUE #1822486.