
K-CAP 2007: day 3

Interactive Knowledge Externalization and Combination for SECI model

Yoshinori Hijikata from Osaka University in Japan talked about capturing both tacit and explicit knowledge from two people while they are engaged in a conversation. The stages of the conversation are: socialization, externalization, combination and internalization. The GRAPE interface was used to allow the users to collaboratively build a classification tree as they speak. Four general discussion models were observed: (1) both users understand and agree with each other based upon their individual experiences, (2) one user doesn't have knowledge of the topic being discussed, but understands the other user's explanation, (3) one user doesn't understand the other user's explanation completely, but nevertheless modifies his own understanding because he trusts the other user's expertise, (4) both users disagree with each other, but one user reluctantly gives in to the other user.

Human Computation

Luis von Ahn talked about the CAPTCHA test he developed. The test is designed to protect a website from being misused by automated computer programs: a computer has trouble passing the test, but a human can pass it with ease. This has led to a whole new industry of "captcha sweat shops", where spam companies employ people in developing countries to solve captcha tests all day long so they can sign up for free email accounts and use those to send out spam. In total, about 200 million captchas are solved every day, and solving one takes an average human about 10 seconds, which amounts to a great deal of wasted, distributed human processing power.
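A quick back-of-the-envelope calculation, using only the two figures quoted in the talk, makes the scale of that wasted effort concrete:

```python
# Rough estimate of daily human effort spent on captchas, based on the
# figures quoted in the talk: ~200 million captchas per day, ~10 seconds each.
captchas_per_day = 200_000_000
seconds_per_captcha = 10

total_hours = captchas_per_day * seconds_per_captcha / 3600
print(f"{total_hours:,.0f} human-hours per day")  # -> 555,556 human-hours per day
```

That is on the order of half a million human-hours burned every single day.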

This led to the development of reCAPTCHA, a system that has all the advantages of a regular captcha, but also helps the effort to digitize all the world's books by deciphering words that OCR software could not read. A scanned word from a book that a computer could not recognize accurately is offered up as a captcha for the human to interpret.
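The trick that makes this verifiable is pairing the unknown scanned word with a control word whose answer is already known: the system checks the control word and aggregates the answers given for the unknown one. A minimal sketch of that pairing logic, with hypothetical names and data rather than reCAPTCHA's actual code:

```python
import random

# Hypothetical data: control words with known answers, and IDs of
# scanned word images the OCR engine could not read.
KNOWN_WORDS = {"img_ctrl_01": "harbour", "img_ctrl_02": "quarrel"}
UNKNOWN_WORDS = ["img_scan_4711", "img_scan_4712"]

def make_challenge():
    """Pair one known control word with one unknown scanned word."""
    return random.choice(list(KNOWN_WORDS)), random.choice(UNKNOWN_WORDS)

def check_answer(control_id, control_answer, unknown_id, unknown_answer, votes):
    """Pass the user if the control word is right; record their guess for the
    unknown word so repeated agreement can settle its transcription."""
    if control_answer.strip().lower() != KNOWN_WORDS[control_id]:
        return False  # failed the verifiable half of the challenge
    votes.setdefault(unknown_id, []).append(unknown_answer.strip().lower())
    return True
```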

Luis von Ahn also developed the ESP game, where people have to assign keywords to an image with a partner with whom they can't communicate. If both people guess the same keyword, they win and the keyword gets assigned to that image. The keywording helps services like Google's image search to return more accurate search results.
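Under the hood the scoring step is just string agreement between two isolated players. A toy sketch of that matching step (function and variable names are mine, not the game's):

```python
def first_agreement(guesses_a, guesses_b):
    """Return the first keyword both players typed (case-insensitive),
    or None if they never agreed within the round."""
    seen_b = {g.lower() for g in guesses_b}
    for guess in guesses_a:
        if guess.lower() in seen_b:
            return guess.lower()
    return None

# Example round: both players eventually type "car", so that label
# gets attached to the image.
label = first_agreement(["vehicle", "red", "car"], ["car", "road"])
print(label)  # -> car
```

In the real game, labels an image has already earned become "taboo" words the players may not use, which forces progressively more specific keywords.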

The scary thing is how much information can be found out about a person just by monitoring them playing the game. After just 15 minutes of game-play, the researchers could predict a person's 5-year age bucket with 85% accuracy and their gender with 95% accuracy (only a male would, for example, attempt to label a picture of Britney Spears as "hot"). That is from just a short time anonymously playing an online game, so imagine how much information Google knows about you based on what you search for.

Some other new games being developed in Luis von Ahn's lab are: Squigl, a game where two players trace the outline of an object in an image; Verbosity, a game where people are asked to describe a secret word via a template of questions; and Tag-A-Tune, a game for labelling sounds. All these games and more will soon be coming to the Games With A Purpose (GWAP) website.
