Speech-to-text converters are coming into their own. But speech isn’t just words and sentences.
The use of emotion recognition might prove challenging as well, he added. Despite the claims that it improves love connections and speeds job interviews, consumers might bristle at the thought of being handled gingerly by a machine because they happen to have a note of frustration in their voices.
“The emotion-recognition aspect is being discussed widely,” Hegebarth said. “But there doesn’t seem to be a really reliable way of detecting emotional states fully, and some callers might not like it. They could find it intrusive.” [link]
So what do they find intrusive? From an informal survey I conducted a while ago, it seems at least a slim majority of people don’t mind the idea of giving up information to an artificial system per se, provided there are assurances that the information won’t pass through human hands (cf. Gmail, for instance).
In any case, I don’t think there is the same reaction of intrusion if, for instance, a human registers the emotion in your voice and reacts accordingly. In fact, I imagine we expect a human to be able to handle our specific case when they are talking to us, emotions and all.
It seems to me that what is intrusive about an automated and mechanical response to human emotions is that it makes our emotional response itself seem mechanical and predictable. That my tone of anger doesn’t provoke a sympathetic response, but that it merely places me in the ‘anger’ category, to be dealt with in such and such a way.
In other words, if the machines become responsive to our emotions, then even our most emotional response can still be understood as the behavior of machines.
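To make that concrete, here is a minimal sketch of what this kind of handling looks like from the machine’s side. The emotion labels and canned strategies are invented for illustration, and a real system would sit on top of an acoustic classifier; the point is only that the detected emotion does nothing more than select a branch.

```python
# Hypothetical sketch: an automated agent "handling" a caller's emotion.
# The labels and strategies below are invented for illustration.

CANNED_RESPONSES = {
    "anger": "Apologize, slow the pace, offer escalation to a supervisor.",
    "frustration": "Acknowledge the delay and restate the remaining steps.",
    "neutral": "Proceed with the standard script.",
}

def handle_caller(detected_emotion: str) -> str:
    """Map a detected emotional state to a scripted handling strategy."""
    # The caller's tone doesn't provoke sympathy; it just picks a branch.
    return CANNED_RESPONSES.get(detected_emotion, CANNED_RESPONSES["neutral"])

if __name__ == "__main__":
    print(handle_caller("anger"))
```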