The Artists, The Bot, and the Darknet

One of the interesting challenges facing a grand vision like Luciano Floridi’s Philosophy of Information (PI) and Information Ethics (IE) at the current stage in the development of his larger project, at least as I see it, is the problem of application.[1] It’s all well and good to reimagine and argue for the philosophy of information as a distinct area of philosophy as such, but the really interesting test of whether it’s all worth it, I think, must lie in the consequences of taking the views for which he argues seriously and holding them to be true. I’m not just talking about the logical or philosophical consequences of doing so; I mean the immediate, practical, life-relevant consequences. This is a pretty obvious observation, honestly, and it’s one that Floridi himself, in his policy and identity consulting work for Google, for example, clearly knows all about and needs no help from me to consider.

What dam holds back the ‘net?

Of course, I have no desire to let the author have all the fun when it comes to working out the implications of his claims; what philosopher ever wanted to do that? [2] In this post, I’m going to take a stab at a case using Floridi’s account of agency and the four [normative] ethical principles for IE that he lays out in The Ethics of Information. My very modest aim is to run this up the flagpole and see what happens.

[Really long Philosophical Mess-Making follows, with special reference to a bot with too much bitcoin on its virtual hands…]

I. The Artists, The Bot, and the Darknet

So: Once upon a time (that is, in the recent past), !Mediengruppe Bitnik created an interesting art piece: The Random Darknet Shopper. For this piece of work, they coded “an automated online shopping bot which [they provided] with a budget of $100 in Bitcoins per week. Once a week the bot goes on a shopping spree in the deep web where it randomly chooses and purchases one item and has it mailed to [the artists]”. [3] The artists then put the random purchases on display. The piece is a comment on and a way to explore the marketplace in which the bot conducts its random business, among other things.
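To make the mechanism concrete, here is a minimal sketch (in Python) of the kind of weekly loop such a bot might run. To be clear: this is not !Mediengruppe Bitnik’s actual code, which I haven’t seen; the listing schema and the fetch_listings / pay_and_order callables are hypothetical stand-ins for whatever marketplace and Bitcoin-wallet plumbing the real bot used.

import random
import time
from dataclasses import dataclass

WEEKLY_BUDGET_USD = 100          # the artists' stated weekly budget
SECONDS_PER_WEEK = 7 * 24 * 3600

@dataclass
class Listing:
    """A marketplace listing as the bot might see it (hypothetical schema)."""
    item_id: str
    title: str
    price_usd: float

def random_darknet_shopper(fetch_listings, pay_and_order, ship_to):
    """Once a week, pick one affordable listing at random and have it shipped."""
    while True:
        affordable = [l for l in fetch_listings() if l.price_usd <= WEEKLY_BUDGET_USD]
        if affordable:
            choice = random.choice(affordable)  # no filtering by category or legality
            pay_and_order(choice, ship_to)      # spend the Bitcoin, mail it to the artists
        time.sleep(SECONDS_PER_WEEK)            # wait for next week's spree

The only point of the sketch is that nothing in the selection step encodes any preference about what gets bought, which is exactly what makes the questions below awkward.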

It has gotten at least one especially interesting shopping result: ten 120 mg MDMA (Ecstasy) pills, which means that the bot randomly used Bitcoin to engage in an illegal drug transaction. Note that the bot was not programmed to do so — in fact, it was not programmed to buy anything in particular, only to spend its budget of bitcoin on random items. It also bought cigarettes, fake Nikes, a fake Vuitton handbag, a credit card blank, and a Hungarian passport, among other things. [4]

This presents a difficulty. Can a bot commit a crime? Can it be punished for it? Are the artists legally responsible for its illegal behavior, and therefore subject to the expected sanctions against such behaviors? The usual language for thinking about these things involves discussions of intent, even when the legal infraction can be described as an accidental effect or the emergent behavior of the bot.[5] The law, in the United States at least, does not really have a ready way to describe the bot’s behavior, even as it can describe that of the bot’s programmers. The current legal assumption here is that intent can legitimately be ascribed to persons, but not to artificial systems like a bot. Guns don’t shoot people, after all — people do (says the snide voice in the back of my head reserved for rather poor jokes). If legally considerable agency requires the capacity for intent (and in the US, it appears to do so — I defer to experts on the law on this point, should they have a helpful correction to make to my understanding of the business), then the bot is not really an agent, legally speaking, and cannot be held legally responsible for buying drugs. Yet it’s not clear that the intent of its creators was to do something criminal, even though they could reasonably be assumed to know that this kind of transaction was likely to occur in the marketplace in which they released the Darknet Shopper.[6]

One may also want to ask whether or not the bot can be an agent in the moral sense — and if it can, how that understanding of its behavior might in turn shape the alterations the law would need in order to keep up with developing information technologies.

That is the question that brings us back to Floridi, and I think that if we take him seriously, it is also the wrong question.

II. Agency and Entropy in the Infosphere

If we begin thinking about the Darknet Shopper case in terms of intent and agency, then Floridi can at least give us a fairly clear answer to whether or not the bot is a moral agent. It is, insofar as it satisfies the following definition: An agent (and we’re talking about agency as such here, not moral agency) at some given Level of Abstraction (LoA) [7] is “a system, situated within and a part of an environment, which initiates a transformation, produces an effect, or exerts power on [that environment] over time.” [8] The LoA proviso preceding the definition is important — an agent isn’t just any critter or object that effects some change in its environment. That would be a bit silly. The LoA at which “there is no difference between Alice and an earthquake” is perhaps not the right one for our purpose here. [9] According to Floridi, the appropriate LoA is one at which candidacy for agency is typically delimited by three criteria: interactivity (“the agent and the environment can act upon each other”), autonomy (“the agent is able to change its state without direct response to interaction”), and adaptability (“the agent’s interactions (can) change the transition rules by which it changes state”). [10]
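For readers who find it easier to see these criteria in a mechanical idiom, here is a toy sketch of an agent as a little state machine. This is my own illustration, not Floridi’s formalism: interactivity shows up as the perceive/act pair, autonomy as a state change driven by an internal clock rather than by input, and adaptability as the agent rewriting its own transition rules in response to how its interactions go.

class ToyAgent:
    """A toy state machine glossing Floridi's three agency criteria (my gloss, not his)."""

    def __init__(self):
        self.state = "idle"
        self.ticks = 0
        # transition rules: (current state, observation) -> next state
        self.rules = {("idle", "offer_seen"): "buying"}

    def perceive(self, observation):
        # Interactivity: the environment acts on the agent...
        self.state = self.rules.get((self.state, observation), self.state)

    def act(self):
        # ...and the agent acts back on the environment.
        return "place_order" if self.state == "buying" else "wait"

    def tick(self):
        # Autonomy: the agent changes its state without any direct interaction.
        self.ticks += 1
        if self.ticks % 7 == 0:
            self.state = "idle"

    def adapt(self, outcome):
        # Adaptability: interactions change the very rules by which state changes.
        if outcome == "order_failed":
            self.rules[("idle", "offer_seen")] = "ignoring"

The distinction that matters for the Darknet Shopper is the last one: changing state (autonomy) is not the same thing as changing the rules for changing state (adaptability), and it is far from obvious that the bot does the latter (see note 11).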

Under those conditions, a case can be made for the Darknet Shopper being an agent at the LoA at which it interacts with vendors. I’m not entirely convinced that this particular bot is an agent, but for the sake of playing with the case a bit, I’m going to pretend that there is some LoA at which the criteria are satisfied.[11] What might make the Darknet Shopper a moral agent, then? Well, this is where it’s necessary to talk a bit about the four ethical principles for IE. Those four principles are:

0. entropy ought not to be caused in the infosphere (null law)

1. entropy ought to be prevented in the infosphere

2. entropy ought to be removed from the infosphere

3. the flourishing of informational entities as well as of the whole infosphere ought to be promoted by preserving, cultivating, and enriching their well-being. [12]

Notice that one of the salient features of this list (which, according to its author, presents its principles in increasing order of importance) is that it does not privilege human agents. Rather, the locus of moral value here is the infosphere as a whole, inclusive of all of the entities that both inhabit and constitute it — and human beings are only one of many kinds of informational entities.

An informational entity is a moral agent (relative to the value system accounted for by these principles) when it can engage in morally considerable action — that is, when qua agent at the appropriate LoA, its behavior can either “increase or decrease the degree of metaphysical entropy in the infosphere.” [13] If we are willing to grant the Darknet Shopper agency, then the answer to the question of whether or not it is a moral agent hangs on whether or not its behavior can alter the amount of metaphysical entropy in its environment and to what degree it may do so.
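Schematically, and at the risk of flattening a lot of argument into a conditional, the test at work in this paragraph looks something like the sketch below. The predicates are placeholders for judgments made at the chosen LoA, not computable quantities, and the names are mine rather than Floridi’s.

def is_agent(x, loa):
    # Floridi's three criteria for agency at a given Level of Abstraction
    return x.is_interactive(loa) and x.is_autonomous(loa) and x.is_adaptable(loa)

def is_moral_agent(x, loa):
    # A moral agent is an agent whose behavior can raise or lower
    # the degree of metaphysical entropy in the infosphere.
    return is_agent(x, loa) and x.can_change_metaphysical_entropy(loa)

Everything philosophically interesting is hidden inside those predicates, of course; the sketch only makes the shape of the test explicit.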

If our aim is always to reduce/avoid/correct entropy and to promote the flourishing of informational entities and of the infosphere as a whole, we have to ask ourselves what sort of response to a situation of this kind would accomplish that goal. I think, ultimately, the solution must be systemic rather than individual, insofar as the very existence of the Dark Net or the Deep Web signifies something problematic (possibly entropy-generating) in the existing system. Addressing the question of the bot’s moral agency is at best only a small part of the set of considerations relevant to dealing with the situation its behavior on the Dark Net represents.

Why do these exchanges exist? Because existing legal restrictions and other systems (of surveillance, of taxation and tariff, etc.) provide an incentive for certain users of ICTs to develop ways to evade those systems and restrictions. So, in a sense, the users, maintainers, creators, etc. of the Deep Web and the Darknet are doing something morally valuable, insofar as they are attempting to reduce certain kinds of informational/ontological friction in a way that promotes the flourishing of a freer, less entropic system. Yet that seems troubling — these are also the places in which identities can be bought and sold (see that passport? also: exploitative materials, etc.), and that trade may, on balance, increase metaphysical entropy. Certain kinds of informational friction are necessary for good system function. The question is, which ones? Once we grapple with the four principles, we come to see that whether or not the bot is responsible or accountable is beside the point — the crime of which it may or may not be guilty suggests a larger problem.


[1] The abbreviations are Floridi’s. They are also convenient, so I will continue to use them here.

[2] None of us. Ever. We are an officious bunch of argumentative pests by both inclination and training, so if this dude’s got arguments to work out, we’re so getting some of that action. That’s just how we roll.

[3] The Darknet Shopper (2014). For those unfamiliar with the concept of the Darknet and the marketplace in which the Darknet Shopper bot operates, see this and this for at least a beginning on what’s up here.

[4] Ibid.

[5] Take a look at Ryan Calo’s little piece for Forbes that takes this approach up.

[6] Again, see the Calo piece for Forbes — he lays all of this out better than I just did, and he’s got the legal chops to back it up.

[7] This is Floridi’s concept, pervasive in his recent work on the subject.

[8] Luciano Floridi, The Ethics of Information (Oxford: Oxford University Press, 2013), 140.

[9] Floridi, The Ethics of Information, 140.

[10] Floridi, The Ethics of Information, 140-141; the parens around “can” in the adaptability description are in the original text.

[11] I think it’s certainly debatable whether this particular bot satisfies either the autonomy criterion or adaptability criterion, even at the most favorable LoA — I’d need to know more about how it was coded and how its interactions with the vendors from which it made purchases worked. Right now, in the absence of this information, I’ll entertain the possibility that it does satisfy them.

[12] Floridi, The Ethics of Information, 71. “Entropy” in this context refers to what he calls metaphysical rather than thermodynamic entropy, defined as “any kind of destruction or corruption of entities understood as informational objects…that is, any form of impoverishment of Being” (67). If you want to catch up on all of these things relatively quickly (as I’ve no intention of walking through them), you can see the whole business laid out on one page (you’ll want to full-screen the video in order to read the itty bitty text).

[13] Floridi, The Ethics of Information, 147. I’m not going to rehearse Floridi’s arguments in favor of this position here.
