Will AI “Selves” Work One Day?

22:55 Thu 30 Jun 2011

This afternoon, a conversation at work centered on the fact that you can "teach" text-analysis software with a corpus of a user's instant messages so that, presented with a new message, it can identify which of the user's contacts sent it: no other data, just the body of the message. That's interesting in itself, but I was more curious whether the software could also learn what the user's responses to each contact look like, and from there learn to convincingly feign being the user. Essentially, whether you could train a bot to conduct IM conversations in your stead.
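For the first half of that idea, identifying the sender from the body alone, a naive Bayes classifier over word counts is one classic way to do it. Here's a minimal sketch under assumed conditions: the contact names and tiny training corpus are made up for illustration, and real IM logs would need tokenization and far more data.

```python
# Toy sketch: guess which contact sent a message, using only the body.
# Multinomial naive Bayes over whitespace-split words, Laplace-smoothed.
# The corpus and contacts ("alice", "bob") are hypothetical examples.
from collections import Counter, defaultdict
import math

corpus = [
    ("alice", "want to grab lunch at noon"),
    ("alice", "lunch today the usual place"),
    ("bob", "the build is broken again"),
    ("bob", "can you review my patch to the build"),
]

def train(corpus):
    word_counts = defaultdict(Counter)  # contact -> word frequencies
    msg_counts = Counter()              # contact -> number of messages
    vocab = set()
    for contact, body in corpus:
        words = body.lower().split()
        word_counts[contact].update(words)
        msg_counts[contact] += 1
        vocab.update(words)
    return word_counts, msg_counts, vocab

def classify(body, word_counts, msg_counts, vocab):
    total = sum(msg_counts.values())
    best, best_score = None, float("-inf")
    for contact in msg_counts:
        # log P(contact) + sum over words of log P(word | contact)
        score = math.log(msg_counts[contact] / total)
        denom = sum(word_counts[contact].values()) + len(vocab)
        for word in body.lower().split():
            score += math.log((word_counts[contact][word] + 1) / denom)
        if score > best_score:
            best, best_score = contact, score
    return best

model = train(corpus)
print(classify("lunch at the usual place", *model))  # -> alice
print(classify("is the build fixed yet", *model))    # -> bob
```

The second half, learning each contact's *responses* well enough to impersonate the user, is a much harder generation problem than this classification one, but the same per-contact modeling idea is the starting point.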

So I was quite intrigued to see this post from JWZ tonight discussing more or less that same idea, though apparently without some of the learning aspects. The implementation reportedly isn't very good, but it's definitely an interesting concept, and I wonder if we'll eventually get to the point where bots (or "smart agents") handle this kind of thing for some significant number of people.

One Response to “Will AI “Selves” Work One Day?”

  1. jeffliveshere Says:

    I like the idea that this will probably give people one more level of “deniability”–that is, currently people sometimes say “I didn’t get that voicemail”, or “I didn’t see that IM,” or “that email must have gone to spam.” In the future, we might say something stupid and then later claim it was “just” our agent…
