Melnick’s advice on my proto-proposal was that I seem to need to give the machines something like a self in order for them to be responsible, or to otherwise hold the seat of agency. My first philosophy class as a freshman at UCR was on Parfit and persons, and I haven’t thought about issues of ‘self’ since. I thought Melnick was a bit confused, because he raised this point in the context of talking about consciousness, and if talking about a self necessarily required talking about consciousness, then I was most definitely not interested in the self. In any case, raising issues about the self seemed to push me back into some self-moved-mover mumbo jumbo that I was explicitly trying to avoid.
Flash forward to today, reading an article on Cognitive Radio:
Self-awareness refers to the unit’s ability to learn about itself and its relation to the radio networks it inhabits. Engineers can implement these functions through a computational model of the device and its environment that defines it as an individual entity (“Self”) that operates as a “Radio”; the model also defines a “User” about whom the system can learn.
A cognitive radio will be able to autonomously sense how its RF environment varies with position and time in terms of the power that it and other transmitters in the vicinity radiate. These data structures and related software will enable a cognitive radio device to discover and use surrounding networks to the best advantage while avoiding interference from other radios. In the not too distant future, cognitive radio technology will share the available spectrum optimally without instructions from a controlling network, which could eventually liberate the user from user contracts and fees.
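The article offers no implementation details, but a minimal sketch helps me see what such a model might look like. Everything below is my own invention for illustration: the class names (SelfModel, RadioModel, UserModel), their fields, and the band-selection heuristic are assumptions, not the article’s design. The only thing taken from the quote is that the device maintains explicit models of itself as a radio, of its user, and of the power it senses in its RF environment.

```python
# A minimal sketch, not the article's implementation. Every name and field
# here is assumed for illustration; the article only says the device models
# itself ("Self") operating as a "Radio", its "User", and its RF environment.
from dataclasses import dataclass, field


@dataclass
class UserModel:
    """What the radio has learned about the person using it (assumed fields)."""
    observed_preferences: dict = field(default_factory=dict)


@dataclass
class RadioModel:
    """The device's model of its own radio capabilities (assumed fields)."""
    tx_power_dbm: float = 20.0
    usable_bands_mhz: tuple = (2400.0, 5800.0)


@dataclass
class SelfModel:
    """The 'Self': the radio, its user, and what it has sensed of its world."""
    radio: RadioModel = field(default_factory=RadioModel)
    user: UserModel = field(default_factory=UserModel)
    sensed_power_dbm: dict = field(default_factory=dict)  # band -> observed power

    def sense(self, band_mhz: float, power_dbm: float) -> None:
        """Record how the RF environment looks at the current position and time."""
        self.sensed_power_dbm[band_mhz] = power_dbm

    def choose_band(self) -> float:
        """Pick the usable band with the least observed interference.

        Bands not yet sensed are treated as quiet (-inf dBm), so the radio
        prefers them until it learns otherwise.
        """
        return min(
            self.radio.usable_bands_mhz,
            key=lambda b: self.sensed_power_dbm.get(b, float("-inf")),
        )


radio = SelfModel()
radio.sense(2400.0, -40.0)   # crowded band: lots of radiated power nearby
radio.sense(5800.0, -85.0)   # quiet band
assert radio.choose_band() == 5800.0
```

Even in this toy version, note where the “User” sits: it is one attribute of the self-model among others, on the same footing as the sensed spectrum.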
If I can wax existential for a bit, the self necessarily understands itself in terms of the Other. In the human case, the Other is derived from the Sartrean Look, which serves to situate both my being and the being of the Other. The Other, of course, is paradigmatically human, but for Sartre can be triggered by anything: a creaking floorboard can cause me to become aware of myself as an object in the Look of the Other.
But perhaps the Other need not be understood in terms of purely human phenomenal experience. The Other is, essentially, the category for “things that are agents that are not me”. “User” fits nicely into that category, but notice that it introduces an entirely novel relationship: one in which the self is presented instrumentally, for the purpose of being used. One might try to argue that humans do not relate to the Other as User, except perhaps in certain pathological cases.
But I am more interested in the fact that the system understands itself not merely in terms of its relation to the user, as nearly all conceptions of machine-as-tool would have us think. Instead, the user is just one more aspect of the environment about which the system must learn.
I don’t know what to make of this, but I suspect there is something deep here that I need to think more about.