In his book The Tuning of the World, R. Murray Schafer coined the term “soundscape” to describe the specific ambient conditions of a location. Like a landscape, a soundscape is an aspect of the environment that remains relatively constant and that humans adapt to, interpret, and reshape. Schafer interprets ambient noise as part of the built environment. Take, for example, the way church bells were used to communicate time and call people to attention. The need to hear them became a factor in medieval densification, and growth required parishes to bud rather than sprawl. The information the bells conveyed was too important to do without. But as we’ve needed bells less, they have shifted from signals into soundscape.
But perhaps the newest tendency is not to tune ambient noise out, but to process it. Typically this has been done in the name of art, as a way of humanizing noise through aesthetic effects: sometimes with simple analog acoustics, as in Lancaster’s Singing Ringing Tree, and other times through algorithms, as in Yokohama’s Tower of Winds. Both are interesting artworks, but they become still more interesting when seen as containing design choices in the form of the parameters they transform. The Tree may be the more indiscriminate of the two, but its aestheticization of wind is entirely transparent: the wind blows, you hear the howl and the tone at the same time, and one is explicitly the cause of the other.
The Tower of Winds, on the other hand, is controlled by algorithms that process the wind and noise it visualizes. It’s computerized and opaque; you’d have to see the code to understand the process. I think it’s more beautiful, but it is also bound to the instruments that translate phenomena into machine-readable signals, which are then interpreted by a program according to more-or-less arbitrary rules. The process of getting from input to light relies, quite literally, on a black box.
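To make that arbitrariness concrete, here is a minimal sketch of the kind of input-to-light mapping such an installation might perform. The actual program behind the Tower of Winds is not public, so every weight and threshold below is invented; that invention is precisely the point.

```python
# A hypothetical input-to-light mapping in the spirit of the Tower of Winds.
# The weights and scales are arbitrary stand-ins; the real program is opaque.
def lamp_brightness(wind_speed_ms: float, noise_db: float) -> int:
    # Two sensor readings collapse into one value by rules a viewer on the
    # street has no way to reconstruct from the lights alone.
    score = 0.6 * min(wind_speed_ms / 20.0, 1.0) + 0.4 * min(noise_db / 90.0, 1.0)
    return round(255 * score)  # 0-255 brightness for a single lamp

print(lamp_brightness(wind_speed_ms=8.0, noise_db=65.0))  # -> 135
```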
So too is the ShotSpotter. It assigns value to a particular range of frequencies and timbres, interpreting them as probable gunshots. Software locates the source of the sound through trilateration and presents an operator with a sound clip and a probable location cross-referenced to a GIS model. Further layers of readily processable information are included, hypothetically improving response time.
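The localization step, at least, rests on well-understood geometry. The sketch below shows one common way to recover a source position from differences in arrival time across several sensors, the technique often loosely called trilateration; the sensor layout, timings, and solver choice are all hypothetical, since ShotSpotter’s actual pipeline is proprietary.

```python
# A minimal sketch of sound-source localization by time difference of
# arrival (TDOA). All names and numbers here are hypothetical.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 °C

def locate_source(sensor_positions, arrival_times):
    """Estimate the (x, y) of a sound source from arrival times at >= 3 sensors."""
    sensors = np.asarray(sensor_positions, dtype=float)
    times = np.asarray(arrival_times, dtype=float)

    def residuals(xy):
        # Compare geometric time differences (relative to sensor 0)
        # against the measured ones.
        dists = np.linalg.norm(sensors - xy, axis=1)
        predicted_tdoa = (dists - dists[0]) / SPEED_OF_SOUND
        measured_tdoa = times - times[0]
        return predicted_tdoa[1:] - measured_tdoa[1:]

    guess = sensors.mean(axis=0)  # start from the sensor centroid
    return least_squares(residuals, guess).x

# Hypothetical example: four sensors at street corners (metres); arrival
# times synthesized from a "true" source at (120, 250) to keep the demo honest.
sensors = np.array([(0, 0), (400, 0), (0, 400), (400, 400)], dtype=float)
true_source = np.array([120.0, 250.0])
times = np.linalg.norm(sensors - true_source, axis=1) / SPEED_OF_SOUND
print(locate_source(sensors, times))  # ~ [120. 250.]
```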
But when legibility is so explicit, it also becomes possible to evade the mechanisms of detection by evading the identified sound of gunfire. An individual doesn’t have to suppress the sound, just distort it to the point that it no longer fits the parameters for a “gunshot.” At that point the precision of the system starts to fall apart – and it is that precision that gives police departments a time advantage over human call-ins.
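A toy detector makes the fragility visible. The thresholds below are invented stand-ins for whatever features a real classifier weighs; the point is only that any fixed window can be slipped past by a signal pushed just outside it.

```python
# A toy parameter-window detector. The thresholds are hypothetical;
# real classifiers weigh many more features than these three.
def is_probable_gunshot(peak_hz: float, duration_ms: float, peak_db: float) -> bool:
    return 400 <= peak_hz <= 5000 and 1 <= duration_ms <= 10 and peak_db >= 120

print(is_probable_gunshot(peak_hz=1500, duration_ms=3, peak_db=140))  # True
# A suppressor or baffle shifts the profile just past one parameter:
print(is_probable_gunshot(peak_hz=1500, duration_ms=3, peak_db=110))  # False
```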
All this to say that how you pick parameters matters as much as how you manipulate their content. It’s hard to get criminals to agree to standards.