How does it do it?
Greg Blonder, who we interviewed on the show about a month ago, posted on Internet Evolution today not asking how it does it, but noting how it doesn’t, and proposing a workaround for poor predictive technology. I’ve chatted with Greg a few times, and he’s a great guy to talk to, but I don’t know him well enough to know exactly how much of an AI fan he is.
I am a fan of AI. I’m an AI nut. One of my big fantasies (one that, given enough cash and computing cycles, I think is realistic) is to create a truly sentient (at least by Alan Turing’s standards) AI.
Greg gives a couple examples of how current predictive technology falls short:
- Search engines, and their contextual ads: “Search engine companies believe that they can target ads more efficiently based on invading my privacy and analyzing my last hundred search queries and emails — and thus charge a premium for each ad served. But last week, while I was seeking information on car recalls, I was flooded by ads to buy the very same lemon from the same company I was investigating.”
- Piracy: “The Recording Industry Association of America (RIAA) snoops around our computers to see what music files we’re posting and trying to guess our intent. Do we own the track we posted, and are we just backing it up to the net?”
It’s hard to argue with his examples (and there are others in his list), but I’ll try. The bottom line Greg is getting at is that invading privacy to learn more history is not going to help a computer accurately predict the future. Frankly, I think the opposite is true.
Remember the 20 questions bot I mentioned in the opener? The trick to those AIs is to first narrow the possible responses down to a short list of nouns, and then narrow the possibilities further with a refined tree of questions. Most modern 20 questions AIs can get the answer in fewer than 17 questions, and rarely need more than 26.
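The narrowing trick above can be sketched in a few lines. This is a toy illustration, not how any real 20 questions bot is implemented: the nouns, attributes, and question-picking heuristic are all invented for the example. The core idea is just that each yes/no answer splits the remaining candidates, so roughly log2(N) questions pin down one of N possibilities.

```python
# Hypothetical knowledge base: noun -> attributes it satisfies.
NOUNS = {
    "dog":   {"alive", "animal", "pet"},
    "cat":   {"alive", "animal", "pet", "climbs"},
    "oak":   {"alive", "plant"},
    "rock":  set(),
    "car":   {"machine", "has_wheels"},
    "bike":  {"machine", "has_wheels", "pedal_powered"},
    "phone": {"machine", "electronic"},
    "robot": {"machine", "electronic", "moves"},
}

def best_question(candidates):
    """Pick the attribute that splits the remaining candidates most evenly."""
    attrs = set().union(*(NOUNS[n] for n in candidates))
    def imbalance(a):
        yes = sum(1 for n in candidates if a in NOUNS[n])
        return abs(yes - (len(candidates) - yes))
    return min(attrs, key=imbalance) if attrs else None

def play(secret):
    """Simulate a game against an oracle that knows `secret`."""
    candidates = set(NOUNS)
    asked = 0
    while len(candidates) > 1:
        attr = best_question(candidates)
        if attr is None:
            break
        answer = attr in NOUNS[secret]  # the oracle answers truthfully
        narrowed = {n for n in candidates if (attr in NOUNS[n]) == answer}
        if narrowed == candidates:      # question didn't help; give up
            break
        candidates, asked = narrowed, asked + 1
    return next(iter(candidates)), asked

guess, n_questions = play("bike")
print(guess, n_questions)  # "bike" found in 3 questions (log2(8) = 3)
```

With eight nouns, three well-chosen questions suffice; the real bots face tens of thousands of nouns, which is why they land in the 17-to-26 question range.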
What do you need to do when all you have is one question (a search query), and no retries? You need more context. You can either get that by coaching the user to be more specific, or you can use historical context.
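Here is a toy illustration of the historical-context option, with all titles, topics, and scoring invented for the example (no real search engine works this simply): the same one-word query gets ranked differently depending on the topics of the user's recent searches.

```python
# Hypothetical result set: query -> list of (title, topic tags).
RESULTS = {
    "jaguar": [
        ("Jaguar F-Type review", {"car", "luxury"}),
        ("Jaguar habitat and diet", {"animal", "wildlife"}),
        ("Jacksonville Jaguars schedule", {"sports", "nfl"}),
    ],
}

def rank(query, history_topics):
    """Order results by topical overlap with the user's recent history."""
    scored = [(len(topics & history_topics), title)
              for title, topics in RESULTS[query]]
    return [title for _, title in sorted(scored, reverse=True)]

# Recent searches were about safaris and zoos:
print(rank("jaguar", {"animal", "wildlife", "travel"})[0])
# -> Jaguar habitat and diet

# Recent searches were about car recalls:
print(rank("jaguar", {"car", "recall"})[0])
# -> Jaguar F-Type review
```

One ambiguous query plus a little history yields two different top results, which is the whole argument for mining that history in the first place.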
Search engines, particularly Google, are going back not just a hundred queries, but years in their history, to determine context and intent. Google is also working to invade our privacy on a number of levels, and I’m not just talking about that silly street view thing they have on the maps system everyone seems to be up in arms about.
Look at Blogger, GMail, Search History, GTalk, Calendaring, and just about every tool that’s graduated from Labs into common usage. What’s a common thread? Not just organisation and assistance in utilisation of said data – archival! They default to archiving all text chats, give you nigh unlimited space to store email conversations, go back as far as they can in their history of your searches, and give you a free tool to record your thoughts on everything from the mundane to the profound in Blogger. Then they tie it to one nifty little Google Account that has your name and cookie attached to it.
They want to give you a gPhone and a Social Network too, not so that you can do better business with it (although that will be the selling point, so that you’ll use it), but to give better context and idea mapping, so that when ads do get served up, it’ll know from that bulletin you posted about how much you hate your Honda POS that when you search that term, you aren’t necessarily looking to buy a new one.
Read some Kurzweil, if you don’t believe me. Even if you do – read some Kurzweil. The Age of Spiritual Machines changed the way I think about the future. Kurzweil talks about how, for a time, AIs will be almost indistinguishable from unmodified humans in levels of performance and, in some cases, appearance. And then there will be a period where they excel past the unmodified human in every way possible, especially in matters relating to cognition.
All that having been said, the very things that enable Facebook and Google to sell us better, more targeted, more predictive ads are the same factors driving us toward a solution to the problem Greg describes.
Greg thinks that we should have a ‘transparent internet’ – that is, an internet where actions have consequences. We are slouching ever towards a social internet – where we log in to an internet based operating system that is focused around our task list and our workgroups. Social networks imply responsibility, as actions are increasingly coming with consequences. It’s easier to dig up dirt on a person by looking through their photo albums, but it’s also easier to see where information has been forwarded from, as more and more information is moved around by the grease of social tools like Facebook, MySpace and Twitter.
I don’t think we’ll ever quite have the transparent internet Greg asks for, with modified SMTP and DNS protocols and security aware browsers. There’s just no margin in it for anyone. We will, though, see both more accurate predictions from computers as well as more accountability in our online actions due to social networking. Count on it.