some people write drone software for the military to cope
Somebody said something on the internet and now I have to talk about it. My somebody for this post is Verified Twitter user and self-proclaimed leftist Emily Gorcenski. Gorcenski has been subjected to slanderous accusations that she has developed weapons for US military contractors, based solely on her repeated admissions that she has developed weapons for US military contractors. A particularly controversial point of contention is a series of tweets she wrote in October 2016 recounting “the most messed up thing” she’s ever done:
During my undergrad, I did a gov’t funded research project to use differential game theory to figure out how to make a drone chase a person. In retrospect, this was perhaps the most messed up thing I’ve done. Particularly the extension to drone swarms. The red flag should have been when I modeled my approach based on something called the ‘homicidal chauffeur problem.’
Advertising her unique qualifications at the intersection of computer engineering and state-sanctioned terrorism must have seemed like a harmless enough thing to do at the time. Impress potential employers who probably don’t know what differential game theory is, impress potential Twitter followers with your ability to “learn from your mistakes” and join “the resistance”: a win-win. But tastes change, and her strategy seems to have backfired. Now, she’s on the defensive:
Tankies mad that I had a research project assignment in an undergrad class to show how a consumer quadcopter could follow someone with a a [sic] camera…It was the size of a pancake, so clearly this oppresses the global south, or something.
When you want to start a revolution but your movement can’t defend against a 6” drone making 110 dB noise with a top speed of 12 mph…I guess there are no tennis rackets in communism
Gorcenski is trying to drive home, in the most patronizing voice possible, that she was not actually responsible for any use of technology by fascist governments toward mass murder and oppression, and that she could not have been, because the technology she developed with the sponsorship of a fascist government was 6 inches in diameter. I hope to make it plain why this argument is a non sequitur.
The emphasis on the diminutive nature of the drone in question appeals to the common conception of smallness as harmless. I think it would be a mistake to trust that conception in this case. A small device with a low power output compared to other tools in its class is only harmless if it fails to meet the power requirements to accomplish a harmful task. Pocket pistols have much less penetrative power than assault rifles, but if you shoot someone in the head with a pocket pistol, that person will die. Perhaps Gorcenski’s pancake is too weak to take off when outfitted with an assault rifle. That alone does not rule out strapping a small pistol, a pipe bomb, a taser, or a pepper spray canister to it.
Of course, causing harm to a target does not require the drone to actuate a violent weapon. I don’t think I have to go into great detail about why a fascist armed force might be interested in a camera-toting automaton that can be trained to chase any “suspect”. I mention this because Gorcenski also emphasizes that her drone was very loud and could be defeated with a tennis racket, the implication being that if you can hear the drone coming, you can simply run away or smack it out of the air before it hurts you. Let’s assume, as Gorcenski does, that the drone’s target is physically capable of hearing and running, is handily equipped with a bludgeon, and is immune to the PTSD that I imagine I’d acquire if the NYPD decided to suddenly besiege me with a flying blender: police chases are not stealth missions. Successfully outrunning a camera drone would be a short-lived victory if it has already seen your face, or the logo on the back of your t-shirt, and uploaded the footage to a database for facial and clothing recognition (you can thank techies for that one too). As for the bludgeon, another thing I’d rather not go into great detail about is what happens when you destroy police property while its owners are trying to arrest you.
There’s a more fundamental reason that size doesn’t matter here. Anyone who writes software will eventually be familiar with the concept of scalability. Gorcenski’s project, as she describes it, was to write a software program to allow an automaton to follow a mobile entity within its field of view. Although the program was meant to be executed by a flying drone, the computer science that makes the program possible is relatively divorced from the mechanical engineering specifics of any one particular drone. In other words, although she wrote the program for a supposedly fragile screaming frisbee, it should be possible to scale the program up for larger, stealthier, and more rugged machines. This is because her project does not attempt to solve the problem of how the drone should move, but when and where it should choose to move—a problem not of its muscles, but rather its brain.
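To make the “brain, not muscles” point concrete, here is a deliberately trivial pursuit sketch of my own. It is not Gorcenski’s code, and real pursuit-evasion solvers built on differential game theory are far more sophisticated; the point is only that the chase logic touches the hardware through a couple of parameters, so moving it to a bigger, quieter, sturdier machine means changing numbers, not rewriting the algorithm.

```python
# Toy pursuit controller: the "brain" decides where to go next;
# the airframe's physical limits are just parameters handed to it.
# Illustrative sketch only -- not anyone's actual drone software.
from dataclasses import dataclass
import math


@dataclass
class Airframe:
    max_speed: float  # metres per second -- the only place the hardware appears


def pursue(drone_pos, target_pos, airframe, dt):
    """Return the drone's next (x, y) position: fly straight at the target,
    capped at whatever speed this particular airframe can manage."""
    dx = target_pos[0] - drone_pos[0]
    dy = target_pos[1] - drone_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return drone_pos
    step = min(airframe.max_speed * dt, dist)
    return (drone_pos[0] + step * dx / dist,
            drone_pos[1] + step * dy / dist)


# "Scaling up" from a screaming frisbee to something nastier is a matter
# of swapping the parameters, not the pursuit logic.
pancake = Airframe(max_speed=5.4)          # roughly 12 mph
something_bigger = Airframe(max_speed=30.0)
```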
I think both Gorcenski’s initial description of her project and her later condescending defense of it represent a general failure of first-world software programmers to think critically about the ethics of their work. Programmers are capable of causing far more societal harm than they know, or pretend to know, even if they’re not as on-the-nose about it as teaching flying robots to chase human beings. A videogame programmer writing sophisticated algorithms to make virtual enemies chase after the player could have arrived at the same point that Gorcenski did. A web developer adding an image description feature to a social networking app, ostensibly to aid those who use screenreaders, can essentially outsource to users the labour of populating image recognition databases. Scandalously, NASDAQ-listed tech companies and the Department of Defense don’t seem to be motivated primarily by an appreciation of art, culture, or quality of life when they fund computer science R&D projects.
Conveniently for me, a software programmer who actually enjoys programming and would like to be able to sleep at night, I also think we are capable of creating social good. The same software that could enable American cops and other occupying forces to remotely chase civilians could instead be used to make a drone follow its disabled owner and perform tasks currently relegated to service animals. Similar things could be said of the code that prevents Boston Dynamics’ quadrupeds from being tipped over. Easier said than done, of course, when we know where the money is (writing course-correction software for military vehicles) and where it isn’t (making the world a better place). Emily Gorcenski knows those things too, I think. She may have pivoted from solemnly humblebragging about her redemption for the one time in college when she wrote murderbot software, to ridiculing everyone who dares to suggest that she’s ever developed anything more advanced than a high school robotics project, but I think she was telling the truth the first time. I have seen terrible, ignorant programmers go far, but I don’t think she could have gone as far as she has if she truly didn’t know what a proof of concept is. I don’t think she genuinely believes that the US government would have given her money to write software for a hovering coffee can if it wasn’t hoping to appropriate that software for “peacekeeping missions”. In her words:
I wasn’t really naive, I was just another asshole kid raised by a conservative family…[It] took me a long time to wake up to the harm I was capable of causing. I’m glad I did, though.
She’s not unaware; she’s awake. God help us.