Everyone’s favorite analyst Gene Munster can’t be accused of skimping on due diligence. To see whether Siri stands up to a manually entered Google search, he (or his staff) asked Siri 1,600 questions: half on a busy street, half in a hotel room. Here are the results from his note to clients, courtesy of CNNMoney:
- Google understands 100% of the questions (not surprisingly, since they are keyed in)
- Google replies accurately 86% of the time
- Siri comprehends 83% of queries in noisy conditions, 89% in a quiet room
- Siri answers accurately 62% of the time on the street and 68% in a quiet room
Munster pegs Siri at around two years behind Google, writing that “in order to become a viable mobile search alternative, Siri must match or surpass Google’s accuracy of B+ and move from a grade D to a B or higher.”
You can see some of the errors Siri hit at the link above.
I do have some issues with the test’s basic assumptions. For one, Siri is still beta software. I’m not entirely sure Apple should have released beta software this widely in the first place, but since they have, you can’t expect Siri to work properly when you know it isn’t finished. And comparing it to manually typed Google searches? Of course it will be less accurate. Wouldn’t it be far more relevant to pit Siri against a voice-based search tool on Android and see how that stacks up?