On not fooling anyone

So I got a phone call at work the other day.  I answered it and was greeted with a couple of seconds of silence before the person on the other end asked for some sort of manager – in charge of marketing, maybe? – that our specific store doesn’t have.  I explained that we’re corporately owned (this type of phone call, at least the initial part, happens more often than you’d think) and that not only did I not know the name of the person who was in charge of marketing, they likely weren’t called the “marketing manager,” and they were also almost certainly in Denver and not Indiana.

A brief moment of silence again.  “I’m sorry, I didn’t understand that,” the person on the other end says.  “Can you repeat your answer?”

Huh.

“I’m talking to a robot, aren’t I?” I said.

A brief moment of silence.

“No.  This is a living person,” the robot on the other end lies.  At which point I hung up on it and it didn’t call back.  I briefly regretted not asking it what the main ingredient was in tomato soup as a quick Turing test and then went on with my day.

Your question for today:  Even assuming that we live in a world where a voice-recognition computer program is the tool you want to use to determine who runs marketing for various local furniture stores, why would you a) program that computer to lie about being a computer when challenged, and b) if you do choose to make it lie, why program it to lie in such an unconvincing manner?  No actual living person would have answered like that.  Not one, anywhere.

Theories are welcome.