Modus Operandi is a modular program and media installation developed for the 2017 MFA thesis exhibition at SUNY Stony Brook.
In Modus Operandi, the horizon line is used as an epistemological motif, a way to re-render images of power in the form of New York State Department of Transportation (DOT) infrared traffic security cameras. Using the concept of “bad” algorithms, or algorithms that work to break action, these real-time images of control are subverted by removing the cars they observe, rendering the footage unusable and thus producing a form of countersurveillance. Using traditional landscape composition techniques, the moving images are algorithmically re-rendered to present the viewer not with an image of the drivers below, but with an image of the machine eye itself, gazing upon an open sky.
The process uses a system of computer vision and other decision making algorithms to determine what to remove from each frame.
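The thesis does not specify which removal algorithm the installation uses, so the following is only a plausible sketch of such a step, assuming a temporal-median background model; the function name, frame shapes, and the intensity threshold are illustrative, not the work's actual code:

```python
import numpy as np

def remove_moving_objects(frames, threshold=30):
    """Replace moving-object pixels (e.g., passing cars) in the most
    recent frame with a background estimate, leaving the static scene.

    frames: sequence of grayscale frames, shape (T, H, W), uint8.
    threshold: per-pixel intensity difference treated as motion.

    Illustrative temporal-median background subtraction, not the
    installation's published method.
    """
    stack = np.asarray(frames, dtype=np.uint8)
    # Per-pixel median over time approximates the unmoving scene.
    background = np.median(stack, axis=0).astype(np.uint8)
    current = stack[-1]
    # Pixels that deviate strongly from the background are "motion".
    motion_mask = np.abs(current.astype(int) - background.astype(int)) > threshold
    cleaned = current.copy()
    cleaned[motion_mask] = background[motion_mask]  # erase the moving cars
    return cleaned, motion_mask
```

The same decision the enforcement system makes (where is the car?) here drives the opposite action: the detected region is painted over rather than captured.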
What is a good algorithm?
Computer vision, the conversion of high-dimensional data from the real world, such as an image or video, into quantitative data, relies on specified algorithms to computationally extract information from the world at massive scale, also known as big data. A good computer vision algorithm, then, would be one that recognizes and classifies the various components of an image accurately and efficiently, in order to take some sort of action. With this technology, images have become, in their own sense, diagrams with representational meaning, signs optically removed from a signifier, visualizations of action, power, and control.
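As a minimal illustration of this conversion from image to quantitative data, a high-dimensional pixel array can be collapsed into a handful of scalar measurements; the function and feature names below are purely illustrative and not tied to any DOT system:

```python
import numpy as np

def brightness_features(image):
    """Reduce a grayscale image array to a few scalar features:
    a toy instance of computer vision's image-to-data conversion."""
    img = np.asarray(image, dtype=float)
    return {
        "mean": float(img.mean()),                    # average brightness
        "max": float(img.max()),                      # brightest pixel
        "bright_fraction": float((img > 128).mean()), # share of bright pixels
    }
```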
Traffic Cam as Image of Control
Red light security cameras, commonly known as traffic cams, are used across the United States in many cities with populations over one million. As of 2016, they have been banned regionally in a number of states, but remain in broad use across New York City and in Nassau and Suffolk counties, NY. These cameras are typically installed in areas of high traffic (most often expressways and busy intersections), with a camera component mounted on the post next to the traffic light and a series of sensors, usually embedded in the road, that take sequential measurements to determine the speed of a passing car. In principle, an image or video recording is captured when a car is caught speeding or running a red light; the recording then undergoes a series of automated analyses in which computer vision is applied to detect the car in question, the alphanumeric license plate, and the face of the driver, thus producing evidence for prosecution of the rogue driver.

There has been a great deal of controversy around the current use of traffic cams to deter road law-breaking, fueled by evidence that the cameras do not actually decrease accident rates and, in some cases, even appear to have increased crashes resulting in injuries. Implementation of the cameras predictably increases the number of ticket fines issued, and the state thereby generates revenue from the offense tickets. This model is a point of contention for the accused, who may feel that such public surveillance violates their civil rights and that the power is being abused for profit. This particular system, and the reaction it has prompted from a divided public, is a poignant example of how the speed of the machinic image, images produced by technologically enhanced vision, comes up against our inability to ethically make sense of them.
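The paired road sensors described above amount to a simple timing calculation: speed is the known sensor spacing divided by the interval between triggers. A hypothetical sketch, in which the sensor gap and speed limit are made-up parameters rather than any actual DOT specification:

```python
def estimate_speed_mph(sensor_gap_ft, t1, t2):
    """Estimate a car's speed from two road sensors a known distance apart.

    sensor_gap_ft: distance between the embedded sensors, in feet.
    t1, t2: timestamps (seconds) when the car crossed each sensor.
    Returns speed in miles per hour.
    """
    dt = t2 - t1
    if dt <= 0:
        raise ValueError("second trigger must follow the first")
    ft_per_s = sensor_gap_ft / dt
    return ft_per_s * 3600.0 / 5280.0  # convert ft/s to mph

def should_capture(speed_mph, limit_mph=55.0):
    """Flag the vehicle for an image capture if it exceeds the limit."""
    return speed_mph > limit_mph
```

A car crossing a 10-foot sensor gap in a tenth of a second is moving at roughly 68 mph, so it would trip a 55 mph threshold and trigger the capture.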
That leads back to the question: what is a good algorithm?
As established above, it is one that accomplishes a certain task efficiently and with a desired outcome. What, then, is a bad algorithm? Could it be one that subverts or undermines the intention of the good algorithm? Could it be one that mathematically breaks down the system, resulting in unexpected consequences? I would like to take these two broad requirements, (1) it subverts the original intention, and (2) it improperly uses the system toward its own unexpected results, and compile them into one definition of a “bad” or “imperfect” algorithm.
In the scenario of the DOT traffic cameras, it could be inferred that if the computer vision algorithm enables mass surveillance for the purpose of enforcing road safety, the “bad” algorithm might act as a form of countersurveillance by rendering the image unusable. The sneaky algorithm, already possessed of its own modus operandi, is here turned on itself to produce the exact opposite of its original intention: a diagram of error, of misinformation, an image that cannot be parsed by a computer but must be understood qualitatively by human eyes. By restoring the human to the machinic image, I aim to restore sensitivity to the image of speed through a practice of attention and care.