A Few More Thoughts On The Random Darknet Shopper

For those interested in following up what happened to the case I talked about in a previous post, here’s a little update from the artists that I probably should have tacked on to the original piece (it was getting loooong, so I cut the update):

“On the morning of January 12, the day after the three-month exhibition was closed, the public prosecutor’s office of St. Gallen [ed: Switzerland] seized and sealed our work. It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited by destroying them. This is what we know at present. We believe that the confiscation is an unjustified intervention into freedom of art. We’d also like to thank Kunst Halle St. Gallen for their ongoing support and the wonderful collaboration. Furthermore, we are convinced, that it is an objective of art to shed light on the fringes of society and to pose fundamental contemporary questions.”[1]

The questions the artists say that their work poses (about autonomous information entities — “robots” — breaking the law) remain live, and the problematic legal status of artificial agents is in sharp relief. Swiss authorities apparently have not based their confiscation on charges against the bot or the artists — their grounds for sealing and confiscating the bot and the content of the exhibition seem to be about protecting third parties. The bot has not been arrested — it has been seized, because of course there is no existing legal convention that permits the arrest of an agent of this kind. It’s not clear yet whether or not the artists will be arrested; their main argument in favor of the legality of their exhibition rested on the claim that their work constitutes a form of art “in the public interest” and may therefore be exempt from certain forms of legal action.

There are some other questions that may now readily come to mind (and indeed have come to mind for a great many people who think about problems of this sort): If we grant that some governmental agency has an ethical right and/or obligation to police the behavior of its citizens, to what degree is the bot-as-agent subject to that policing? Could it be a “citizen”? If it has ethical and/or legal obligations, can it also have rights? What rights could it possibly have? It seems possible, after all, to hold a machine to be compliant or noncompliant with regulations (that’s what vehicle inspections, for example, already do) without necessarily according it either rights or obligations. It is tempting to put artificial entities in the mix with other species of natural entities on this score, but I find myself reluctant to do so, mostly because I find it difficult to conceive of them as relevantly similar on certain points.

I still maintain, however, that if we follow Floridi’s more environmentally oriented approach, even these questions (as wonderfully interesting as they are) may miss the point. We’re worried about the (automated) tree here when we haven’t even begun to grasp the (informational) forest.

[1] !Mediengruppe Bitnik, Statement (1/15/15).



About L. M. Bernhardt

For a good long while (15 years or so), I taught philosophy at a little private university in northwest IA, and occasionally branched out into playing music, dabbling in photography, experimenting with food, and writing nonsense on my blog. The philosophy teaching part ended in 2017 (program elimination via prioritization), but never fear! I've just finished my MLIS at San Jose State University, and I'm currently on the market looking for new adventures in either philosophy or LIS. For now, I labor at a fairly interesting administrative job in order to support my dogs in the lavish manner to which they've become accustomed.
This entry was posted in Philosophical Mess-making and Philosophy of Information.