
K-CAP 2007: day 2

Maintaining Constraint-based Applications
Tomas Eric Nordlander talked about a brilliant constraint programming system for hospital inventory control. He defined Knowledge Acquisition as: "the transfer and transformation of problem-solving expertise from some knowledge source into a program. The process includes knowledge representation, refining, verification and testing". He went on to define Knowledge Maintenance as: "including adding, updating, verifying, and validating the content; discarding outdated and redundant knowledge and detecting knowledge gaps that need to be filled. The process involves simulating the effect that any maintenance action might have". Knowledge Maintenance is extremely important, but frequently under-appreciated.

The author designed a system named proCAM for Cork University Hospital, replacing the hospital's previous manual logistics system. It had to answer three basic questions: What products should be stored? When should the inventory be replenished? How much should be ordered? To answer these questions, proCAM considered historic demand, service level (risk of being out of stock), space constraints, time constraints, holding cost, ordering cost, current stock level, periodic review time, and more. These can be generalized into physical constraints, policy constraints, and guidelines and suggestions (e.g. it is nice to order and store together two products that get used together).
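As a rough illustration of how the "when" and "how much" questions can be answered from inputs like these, here is a textbook reorder-point and economic-order-quantity sketch; the formulas and names are my own example, not proCAM's actual model:

```python
import math
from statistics import mean, stdev

# Hypothetical sketch of a reorder-point / order-quantity calculation.
# proCAM's real model was not presented in this much detail; this only
# illustrates the kind of inputs the talk mentioned (demand history,
# service level, ordering cost, holding cost).

Z = {0.90: 1.28, 0.95: 1.65, 0.99: 2.33}  # service level -> normal z-score

def order_level(daily_demand, lead_time_days, service_level=0.95):
    """Stock level at which a new order should be placed."""
    mu, sigma = mean(daily_demand), stdev(daily_demand)
    safety_stock = Z[service_level] * sigma * math.sqrt(lead_time_days)
    return mu * lead_time_days + safety_stock

def order_quantity(annual_demand, ordering_cost, holding_cost):
    """Classic economic order quantity (EOQ) trade-off between
    ordering cost and holding cost."""
    return math.sqrt(2 * annual_demand * ordering_cost / holding_cost)

print(order_level([12, 9, 14, 11, 10, 13], lead_time_days=3))      # reorder point
print(order_quantity(4000, ordering_cost=25.0, holding_cost=1.5))  # ~365 units
```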

proCAM used a combination of operational research algorithms and constraint programming (CSP) to do its magic. It is very easy to use: the users of proCAM only see two values on the display, the order level (the stock level at which a new order should be placed) and the order quantity (the amount of the product that should be ordered). Behind the scenes, the system takes all constraints and past history into account to calculate the ideal order amounts. It can even detect seasonal variations in stock usage patterns and adjust order amounts accordingly. If someone tries to place an order that violates one of the system's constraints, the violation is highlighted and the user is given the option of overriding the constraint and placing the order anyway, adjusting the constraint, or canceling the attempted order. Constraints can thus be maintained on-the-fly by hospital staff through an easy-to-use interface. proCAM also supports different sets of constraints for, e.g., the day-shift and night-shift staff of the hospital.
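The override/adjust/cancel flow could be sketched like this (a hypothetical illustration; the talk did not describe proCAM's internals, and all names here are placeholders):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the violation-handling flow: when an order
# breaks a constraint, the user may override it, adjust the constraint
# on the fly, or cancel the order. All names are placeholders.

@dataclass
class Constraint:
    name: str
    check: Callable[[dict], bool]  # returns True if the order satisfies it

def place_order(order: dict, constraints: list[Constraint], ask_user) -> bool:
    for c in constraints:
        while not c.check(order):
            choice = ask_user(f"Order violates '{c.name}'")  # "override" | "adjust" | "cancel"
            if choice == "override":
                break                     # place the order anyway
            elif choice == "adjust":
                c.check = lambda o: True  # stand-in for editing the constraint on the fly
            else:
                return False              # cancel the attempted order
    return True  # all constraints satisfied (or overridden)

# e.g. a day-shift storage limit; the night shift could carry its own list
day_shift = [Constraint("max 40 boxes of gloves in store",
                        lambda o: o["product"] != "gloves" or o["qty"] <= 40)]
```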

One could imagine the same system being adapted to almost any inventory control scenario.

Strategies for Lifelong Knowledge Extraction from the Web

Michele Banko (a student of Oren Etzioni's) talked about the "Alice" system. Like TextRunner, Alice extracts facts from a text corpus, but it also attempts to build logical theories (e.g. citrus fruit = {orange, kiwi}). Alice adds generalized statements and embellishes class hierarchies. It supports lifelong learning through bottom-up, incremental acquisition of information: it extracts facts, discovers new concepts, generalizes these facts and concepts, and repeats this process indefinitely. The output is an updated domain theory.
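As a toy illustration of that loop (my own stand-in code, not Alice's actual pipeline):

```python
# Toy version of the loop: extract facts, generalize them through known
# concepts, add both to the theory, repeat. Names and data are illustrative.

ISA = {"orange": "citrus fruit", "kiwi": "citrus fruit"}  # toy taxonomy

def extract_facts(corpus):
    # Stand-in for TextRunner-style open extraction of (subj, verb, obj).
    return {tuple(line.split(" ", 2)) for line in corpus}

def generalize(facts):
    # Lift instance-level facts to class-level statements via the taxonomy.
    return {(ISA[s], v, o) for (s, v, o) in facts if s in ISA}

theory = set()
corpus = ["orange contains vitamin-C", "kiwi contains vitamin-C"]
for _ in range(2):                       # Alice would iterate indefinitely
    facts = extract_facts(corpus)
    theory |= facts | generalize(facts)  # the updated domain theory
print(theory)  # includes ('citrus fruit', 'contains', 'vitamin-C')
```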

Alice, when answering a query, does not use exhaustive search, because its data is never assumed to be perfect. Instead, it uses best-first search and search-by-analogy (association search) to navigate its knowledge tree.
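A minimal best-first search sketch, assuming a scoring heuristic over nodes (Alice's actual heuristics and association search were not spelled out, so this is just the generic pattern):

```python
import heapq
from itertools import count

# Generic best-first search: always expand the highest-scoring node on
# the frontier instead of exhaustively visiting everything. The search
# is bounded, since the underlying data is never assumed to be perfect.

def best_first(root, children, score, is_goal, max_expansions=10_000):
    tie = count()  # unique tie-breaker so the heap never compares nodes
    frontier = [(-score(root), next(tie), root)]  # negate: heapq is a min-heap
    seen = {root}
    for _ in range(max_expansions):
        if not frontier:
            return None
        _, _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for child in children(node):
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (-score(child), next(tie), child))
    return None  # give up rather than search exhaustively
```

Pure best-first search can get stuck deep in one branch, which matches the failure mode noted below; penalizing depth in the score is one common mitigation.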

Evaluation consisted of assessing the returned knowledge as true, off-topic (true, but not interesting), vacuous, incomplete, or erroneous. The system was 78% accurate. Problems occurred when the best-first search got distracted and went too deep down a specific search branch.

Indexing ontologies with semantics-enhanced keywords
Madalina Croitoru, standing in for Bo Hu, talked about a system for adding keyword meta-data to ontologies for improved indexing.

She talked about the need to index ontologies for easier and faster search and retrieval. Ontologies are different from text documents, so traditional text indexing can't be blindly applied. Ontologies are supposed to be conceptualizations of a domain, so the emphasis of this work was to take advantage of this aspect when indexing them. Existing ontology indexing approaches use flat keyword indexes, human-authored manual indexes, or PageRank-like indexing techniques.

The authors' semantics-enhanced keyword approach works by unfolding all axioms in an ontology until all primitive concepts are extracted. These concept names are then weighted according to, e.g., whether or not they appear negated. Finally, because ontologies are conceptualizations of a domain, it should be possible to take advantage of other people's conceptualizations of the same knowledge. So the approach harvests Wikipedia articles (and other articles linked to from them) relevant to the ontology, and then uses latent semantic analysis to further tune the ontology's keyword indexes.
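A small sketch of the unfold-and-weight step (the toy ontology, recursion, and weights below are my own illustration; the actual weighting scheme is more involved):

```python
# Hypothetical sketch: unfold defined concepts down to primitive concept
# names, then weight each name, down-weighting names that appear negated.

definitions = {                        # toy TBox: name -> (positive, negated) parts
    "Father": (["Man", "Parent"], []),
    "ChildlessMan": (["Man"], ["Parent"]),
}
primitives = {"Man", "Parent"}

def unfold(concept):
    """Recursively expand a concept into (primitive name, polarity) pairs."""
    if concept in primitives:
        return [(concept, +1)]
    pos, neg = definitions[concept]
    pairs = [p for c in pos for p in unfold(c)]
    pairs += [(name, -pol) for c in neg for (name, pol) in unfold(c)]
    return pairs

def keyword_index(concept):
    """Weight each primitive keyword, e.g. 1.0 if positive, 0.25 if negated."""
    return {name: (1.0 if pol > 0 else 0.25) for name, pol in unfold(concept)}

print(keyword_index("ChildlessMan"))   # {'Man': 1.0, 'Parent': 0.25}
```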
