deltaflow: home

Viewing entries tagged with 'computers'

Software architecture with Grady Booch

5 February 2007 | 62 Comments | Tags: , ,

I recently attended a round-table discussion with Grady Booch. Yes, the Grady Booch. What, you've never heard of him? If you studied Computer Science you are sure to have at least one book of his. He is one of the gurus of software development. He is now working as "chief scientist" for IBM.

Read his blog here and another blog of his here.

You can also watch his recent Turing Lecture on "the promise, the limits and the beauty of software". It is very interesting.

Here are some tidbits from the discussion with him:

Functional programming languages (like LISP, Scheme and SML) failed largely because they made it very easy to do very difficult things, but it was too hard to do the easy things.

The current buzzword for revolutionizing the software industry is SOA: Service Oriented Architecture. Grady calls it "Snake Oil Oriented Architecture". It is just re-branded "Message Oriented Architecture". The idea is to expose services and describe them using WSDL. This decreases coupling between systems. The service becomes the thing to test against. The rest of the software application becomes a black box. A meta-architecture emerges: no software is an island unto itself.

It is a good idea, but the hundreds of WS* standards are so complicated and ill-defined that Microsoft's and IBM's implementations end up being incompatible. Lesser companies have no hope of ever implementing these crazy so-called standards. Just another scheme by the big companies to lock people into their software.

Bill Higgins' REST-style of SOA is much more promising. It builds upon the idea of something like HTTP instead of the complex transfer protocols of the WS-Vertigo world.
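To make the contrast concrete, here is roughly what the REST style amounts to in practice: a resource is just a URL and a plain HTTP GET retrieves it. A minimal sketch (the endpoint and JSON shape are hypothetical, purely for illustration):

```python
import json
import urllib.request

# REST style: the service is just a URL; no WSDL, no WS-* stack.
# This endpoint is a made-up example, not a real service.
url = "http://example.com/orders/42"

with urllib.request.urlopen(url) as response:
    order = json.load(response)  # e.g. {"id": 42, "status": "shipped"}

print(order["status"])
```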

But back to software architecture...

The next big challenge in software architecture is concurrency. Raw clock speed has just about reached its physical limit. Chip companies are now putting multiple copies of the same CPU onto a single chip. The result is that applications can no longer just be run faster. They have to be run in parallel in some way. For example:

Dreamworks computer animation uses 10,000 servers in a production pipeline to render movies like Shrek 3. They will soon switch to multi-core processors, but will have trouble distributing the workload to take advantage of all those cores.

The game company EA has the same problem. The Playstation 3 uses the Cell processor, which has an 8-core CPU. How does one take advantage of all these 8 cores? EA segments their games into simple concerns: graphics on one core, audio on another, AI on yet another, etc. But the company admits that they are using only about 10% of the processor's capacity. So much potential computing power is wasted because it is really difficult to parallelize something as complex as a video game.
A typical Google node (and there are many around the world) consists of about 100,000 servers, but Google has a relatively "easy" problem. Search is "easy" to parallelize.
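The render-farm and search cases are the "embarrassingly parallel" best case: the units of work are independent, so they can simply be farmed out to however many cores exist. A minimal Python sketch of that pattern, with the actual work stubbed out:

```python
from multiprocessing import Pool

def render_frame(frame_number):
    # Stand-in for an expensive, independent unit of work
    # (one frame of an animated film, one search query, ...).
    return sum(i * i for i in range(100_000)) + frame_number

if __name__ == "__main__":
    with Pool() as pool:  # one worker process per core by default
        frames = pool.map(render_frame, range(1000))
    print(len(frames), "frames rendered")
```

A video game is the opposite case: graphics, audio and AI all share state dozens of times per second, so pinning each subsystem to its own core leaves most of the chip idle, exactly as EA found.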

The perfect architecture doesn't exist. Good architectures have evolved over time. The first version of Photoshop wasn't very good, but it has undergone many rebirths. Amazon's computer systems can handle the loss of an entire data-center without a shopper ever noticing. It certainly wasn't always that way, but by gradual refinement they have built (and are continuing to build) a better and better architecture.
A typical EA game costs about $15 million just in development cost (that is without the cost involved in licensing, marketing, or distributing). Two kids in a garage can no longer create amazing software. They can have a great idea, but it has to evolve into something much more complex to be truly useful (on that note: Google is a company most seriously in need of adult supervision; way too much money in the hands of kids. They will soon face a mid-life crisis just like IBM has in the past and Microsoft currently is right in the middle of - just look at the state of Windows Vista).

Some principles for a good architecture:

  • Crisp and resilient abstractions: use an object-oriented view of the world, rather than an algorithm-based view. Think about things instead of processes (this idea dates back to Plato).
  • Good separation of concerns: that is in one sense obvious, but it is also really hard to get right. It is very tempting to put bits of logic in the wrong places in the architecture.
  • Balanced distribution of responsibilities: no part of the system should dominate the entire architecture.
  • Simple systems: the holy grail; very few software companies get to this point. The best systems are ones that actually decrease their amount of code over time. Good developers find ways to do the same functions more efficiently.

How do you tell a good architecture when you see one? Ask the following questions:

  • Do you have a software architect? (or, at most, 2 - 3 people sharing the role)
  • Do you have an incremental development process? (not waterfall, but releasing a new version every week or so)
  • Do you have a culture of patterns? (design patterns are beautiful and the best thing for creating good software)

If the answer to all three questions is "yes", then chances are you have a good architecture, or even if you do not have a good architecture at the moment, you will gradually evolve to having one.

4Plus1 Architecture

Want to learn about good architecture? A good place to start is the 4+1 model view of software architecture. Software needs to be envisioned from multiple different perspectives simultaneously. Just like there can't be just one 2D diagram outlining the plan for a house, there can't be a single view of a software application. [I might add that there can't be just a single view of the Universe. The Vedic literature therefore describes the Universe from 4 different viewpoints simultaneously.]

As for Web 2.0: it is a meme, an idea, a flag pole that you can hang almost anything off.

As for the Semantic Web? Developers don't understand normal software architecture properly, so what chance is there for them to understand something as complicated as semantically aware software? So, in Grady's opinion, the semantic web is a long, long way off.

New email strategy

3 December 2005 | 7 Comments | Tags:

(Warning: this is going to be kind-of technical)

I've just changed how I deal with email. Here is what I did:

I opened two accounts using Google's free Gmail service. I then set my university email account to forward all messages I get to one Gmail account. I then set that Gmail account to forward all messages it gets to the other Gmail account (but not to archive them). This second Gmail account applies various filters and labels to all incoming messages. Newsgroup messages are, for example, automatically archived upon receipt. That way they don't clog my inbox, but I can search them quickly and easily using Google if I need to. I then also set all my private email accounts to forward to this second account.

To check email I simply download from the second Gmail account using Microsoft Outlook and POP, or, if I'm traveling or am at University, just log into Gmail. When I'm at University, I download from the first Gmail account and set Gmail to archive the messages I have downloaded. That way my work machine only gets PhD-related email.

The end result: All my email is backed up on one Gmail account, my University email and private email are separated, I have a nice email web interface I can use when on the road and both my home and University PCs get copies of all email relevant to me at those locations.

Now, all that was left to do was to create a backup of all my old email. After some trying, I used the University's SMTP server and Mozilla Thunderbird (with an extension called Redirect, which can bounce email messages) to forward all my old email to myself at the newly set-up Gmail accounts. I don't think the University was too happy about my suddenly spamming their server with the 20,000 (don't ask) emails I've received in the past year and a half. Anyway, it's for a good cause and they haven't blocked me yet.

This all worked well. I now have all my old email stored in Gmail for instant access wherever I go.

Re-install, re-start, re-fresh

18 November 2005 | 1 Comments | Tags: ,

I reinstalled Windows XP on my computer over the weekend. My lowly Dell Inspiron 4150 laptop was showing its age. The operating system was clogged up with lots and lots of old applications and orphaned data. It was time for a spring clean.

I deleted everything (after doing a backup) and started from scratch. It took three days to install all the many, many programs, utilities and applications I use. I must have downloaded gigabytes of updates and software. Windows itself is the worst culprit. The amount of patches and updates Microsoft has released in 3 years is mind-boggling.

As Gopala-Guru has remarked recently: something as complex as Windows needs constant tweaking and fixing by highly intelligent software engineers. Something even more complex, like the human body, supposedly came about completely by chance and involved no intelligent design whatsoever. Uh-huh …

Yes, sometimes it is necessary to tear down the decrepit, old, moldy, rotten and highly unstable foundations and start over. A fresh new beginning to break free from past paradigms can work wonders. Free from debt and in a new attractive, city-center location ...

It certainly worked well for my computer. It runs so much faster now. Almost like new. The austerity of the re-install will help me get a few more months of life out of this machine.

IJCAI day 7

6 August 2005 | 0 Comments | Tags: , , ,

The conference is over. Over 300 papers were presented. Over 1000 people attended.

The last day started with an interesting keynote from a researcher from Sarcos Corp. Sarcos make robots. For example (in chronological order from the late '80s until today):

  • Utah artificial arm: a realistic looking replacement arm for amputees which picks up the small electrical current on the limb stump and uses it to control a motorized elbow and finger-grabbing action. Not as good as a real arm, but much better than no arm at all. Bionic man is coming.
  • Dexterous undersea arm with gravity compensation: a huge half-a-ton arm for undersea operation that can be remote controlled from an arm-glove-like device. It can, at one moment, pick up a raw egg without breaking the shell and the next pick up and throw a 150 kg anvil.
  • Disney theme park humanoid robots: fast moving robots that, for example, sword-fight each other.
  • Jurassic Park theme park dinosaurs: for example, a huge 40000 kg moving T-Rex for a water ride. They had to slow down the movement of the robot, because it caused the entire building to shake.
  • Las Vegas Bellagio Robotic Fountains: 224 robotic super-high-pressure water shooters that can be programmed to quickly rotate in any direction while shooting water and thereby deliver artistic flowing water displays (price: $37000 each)
  • Micro surgery tube and video camera: a flexible robotic tube that can be fed through an artery from the hip all the way up into the brain to repair critical damage and stop internal bleeding
  • Exoskeleton XOS-1: an exoskeleton which makes carrying 100 kg on one's back feel like 5 kg and enables the wearer to otherwise move as he or she would normally. Developed for the US military for use in a variety of combat conditions.

Statistical machine translation is getting good. Google has created a corpus of 200 million words of multi-language-aligned training data and made it available to researchers. RAM is becoming cheap enough to make complex phrase-based translation algorithms feasible. The results in translating the world's most spoken languages (in this order: Mandarin, English, Spanish, Hindi, Bengali, Arabic, Malay, Portuguese, Russian, Japanese, German, French) into each other are getting really good. Sentence structure is a bit weird, but the translations are fully understandable.

Oh yes, Peter Patel-Schneider thinks that the semantic web is doomed. The RDF foundation on which the semantic web is based can't be extended to support First-Order Logic without introducing paradoxes. One could, for example, use FOL RDF to say: "this sentence is false". The RDF triple syntax is not expressive enough to prevent these kinds of illogical statements. Tragic, isn't it?

Now I'll have to spend some time detoxing from all the materialism I've been absorbed in for the past week.

(Update: check out some Sarcos Robot videos here)

IJCAI day 6

5 August 2005 | 2 Comments | Tags: , , , ,

Lots of stuff I didn't understand today at the IJCAI conference. I'll not talk too much about that. However, there were also some very interesting biology related results. Read on:

The day started off with a keynote by a neuroscientist talking about the brain and what underlying AI models it uses internally. His religious belief was that the only reason we have a brain is to drive our motor system.

Trees don't have brains, because they don't have to move. Sea squirts have small brains to help them swim around the ocean. However, in their lifecycle they eventually attach themselves permanently to a rock. Once attached, the first thing they do is digest their own brain for food. No more movement = no more need for a brain = yum.

He went through a whole load of clever experiments he conducted to determine how the human brain learns to do various tasks. It turns out that legs are optimized for efficiency, while arms use special probability distributions to control their movement, minimizing noise error and using optimal feedback for maximum smoothness and accuracy. Eyes are also accuracy-optimized and share some of the same processing framework as the legs. Since the brain's processing power is limited, it reuses thinking circuitry wherever appropriate. Sounds like a very well designed robot to me.

Some researchers were (still) working on the traveling salesman problem. They found some minor optimizations. Ho-hum.

One guy used Apple's Keynote presentation software to give a presentation on "Temporal difference networks with history" (no, I didn't understand it either). However, his presentation looked so much more refined, smooth and professional than all the other previous Powerpoint presentations. I was shocked at how much better everything looked. If I ever get accepted to give a talk at one of these important international conferences I'll definitely get a Mac and present using Keynote.

An Israeli guy presented a clever partial-database-lookup variation on the A* algorithm. He developed a very quick way to solve Tile puzzles, Rubik's Cubes and Top Spin games. He can now solve any 3x3x3 Rubik's Cube in 1/50th of a second, where previous brute-force computing methods took 2 days.
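The underlying trick, as I understood it, is standard A* with the heuristic read out of a precomputed table of solved sub-puzzles (a pattern database) instead of being computed on the fly. A toy sketch of that combination, with the table stubbed in as a plain dict:

```python
import heapq
import itertools

def a_star(start, goal, neighbours, pattern_db):
    """A* search whose heuristic is a precomputed lookup table
    (a stand-in for a real pattern database of solved sub-puzzles)."""
    h = lambda s: pattern_db.get(s, 0)  # unseen states get 0: stays admissible
    tie = itertools.count()             # tie-breaker so states never get compared
    frontier = [(h(start), next(tie), 0, start, [start])]
    best_g = {}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if best_g.get(state, float("inf")) <= g:
            continue
        best_g[state] = g
        for nxt, cost in neighbours(state):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), next(tie), g + cost, nxt, path + [nxt]))
    return None

# Tiny usage example on a three-state line graph: a -> b -> c
nbrs = lambda s: {"a": [("b", 1)], "b": [("c", 1)], "c": []}[s]
print(a_star("a", "c", nbrs, pattern_db={"a": 2, "b": 1}))  # ['a', 'b', 'c']
```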

An Australian researcher named J. P. Bekmann presented an "Improved Knowledge Acquisition System for High-Performance Heuristic Search". This system used an array of Ripple-Down Rules (RDR) to automatically wire-up a complex two-layer circuit array. He built the system in such a way that it would simulate building thousands of wiring connections thousands of times and use a genetic algorithm to "breed" the most effective rules. By the principles of natural selection only the most beneficial rules survive in the RDR-set and an optimal circuit layout is built.
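A toy sketch of the breed-and-select loop described above. The "rules" here are just numbers and the fitness function is a stand-in; the real system scored a rule-set by the quality of the circuit it wires up:

```python
import random

def evolve(seed_rules, fitness, generations=200, pop_size=50):
    """Toy genetic loop: copy-and-mutate rule-sets, let the worst die off."""
    population = [list(seed_rules) for _ in range(pop_size)]
    for _ in range(generations):
        # Breed: copy a random survivor and mutate one of its rules.
        child = list(random.choice(population))
        child[random.randrange(len(child))] = random.random()
        population.append(child)
        # Select: drop the least effective rule-set.
        population.sort(key=fitness, reverse=True)
        population.pop()
    return population[0]

# Stand-in problem: fitness simply rewards the sum of the "rules".
best = evolve([random.random() for _ in range(10)], fitness=sum)
print(round(sum(best), 2))  # creeps towards 10.0, then plateaus
```

The human intervention described in the paper would correspond to splicing fresh hand-written rules into the population whenever the fitness curve flattens out.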

However, it turns out that the system gets stuck quite quickly. It will run for some time improving itself, but then bottom out very soon, running for thousands of generations without making any significant progress. A human expert has to intervene and introduce new rule-seeds. The genetic algorithm then either integrates these into the rule-set or, if they are ineffective, slowly lets them "die off".

It took the researchers and some circuit wiring experts a week of working interactively with the tool to produce an optimal rule-set of about 200 rules that could create a circuit design as good as one previously built by experts in half a year.

The result, while impressive from a computer science point-of-view, is also very interesting from a Krishna conscious point-of-view. Even simulated evolution only works on very simple problems. The evolving computer program requires periodic intelligent intervention as soon as a slightly more complex task needs to be solved. Complex new features cannot and do not appear by accident.

IJCAI day 5

4 August 2005 | 0 Comments | Tags: , , ,

Today was all about my field: description logics and ontologies.

Realization: most AI research is very simple, but the researchers disguise the triviality of their solutions by loading their presentations with complicated-looking math equations (which no one, even an expert in the field, can hope to understand in the few seconds they appear on a slide) and by speaking in the field's secret jargon, so that anyone who doesn't know what the codewords translate into can't get a grasp on what is actually going on. Since I actually knew a bit about what the researchers presented today (I'm an insider in the DL-cult), I could see through some of the attempts to make research look more complicated than it actually was.

Franz Baader presented a paper on a polynomial-time fragment of OWL-DL (SHOIN). His research focuses on providing a description logic for use in medical ontologies, which are often relatively simple but quite large. His proposed logic EL+ includes conjunction, subsumption, nominals, GCIs and disjointness axioms, but does not include negation, inverse roles, disjunction, or number/cardinality restrictions.
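To get a feel for what survives the cut: a typical medical-ontology definition needs nothing beyond conjunction and existential restriction, which EL+ keeps. A made-up axiom pair in the SNOMED/GALEN style:

```latex
% Hypothetical EL+ axioms (illustration only):
Pericarditis \sqsubseteq Inflammation \sqcap \exists hasLocation.Pericardium
Inflammation \sqsubseteq Disease
% Deciding subsumptions such as Pericarditis \sqsubseteq Disease over
% thousands of axioms like these stays polynomial in EL+; adding
% negation or disjunction would make reasoning intractable again.
```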

Dmitry plans to build this quick algorithm into FaCT++ at some point. This would result in a kind of hybrid reasoner that is really fast for the easy stuff and can bring in the full power of the tableaux algorithm to solve the more difficult classification tasks. Obviously the holy grail is to also link in first-order logic reasoning to be able to reason over almost any construct.

Speaking of Dmitry, he also presented a paper on various optimizations in his FaCT++ reasoner. He has implemented a system of TODO-list-like reorderable queues that allow the reasoner to dynamically order rule execution. Existential restrictions can be evaluated last and non-deterministic disjunction expansion can be ordered in an intelligent fashion. These reshuffling rules can also be varied depending on the type of ontology. GALEN, for example, requires very different rule-ordering to achieve maximum classification performance than other ontologies.
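The TODO-list idea is, at heart, a priority queue over pending rule applications, with the priority table swappable per ontology. A minimal sketch (the rule names and priority numbers are illustrative, not FaCT++'s actual values):

```python
import heapq
import itertools

# Lower number = applied earlier; expensive non-deterministic rules go last.
PRIORITY = {"conjunction": 0, "role-propagation": 1,
            "disjunction": 2, "existential": 3}

tie = itertools.count()  # keeps insertion order among equal priorities
todo = []

def schedule(rule, concept):
    heapq.heappush(todo, (PRIORITY[rule], next(tie), rule, concept))

schedule("existential", "exists R.C")
schedule("conjunction", "C and D")
schedule("disjunction", "C or D")

while todo:
    _, _, rule, concept = heapq.heappop(todo)
    print("apply", rule, "to", concept)  # conjunction, disjunction, existential
```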

Heiner Stuckenschmidt talked about various means of mapping multiple ontologies together. His conclusion: use the e-connections technique invented by Jim Hendler's Mindswap research group in Maryland. It captures more different connection semantics than any other methodology.

I learned yet more about bucket elimination, constraint processing and local search. Adnan Darwiche, a very fast-talking Spanish/Mexican professor, gave the afternoon keynote address. I'll need some time to think about this.

Ian Horrocks gave the best talk of the day. He talked about a new decision procedure for SHOIQ that he and Ulrike Sattler came up with. What made his talk so good was that he didn't make it complicated. He explained the process of reasoning over expressive ontologies abstractly and intuitively. SHOIQ reasoning turned out to be more difficult than anyone would have thought. But now, finally, a reasoner can be built that can classify OWL-DL with all its features, bells and whistles.

Thomas Bittner gave a (confusing) talk on parthood, componenthood and containment. He didn't really say much in 25 minutes. His conclusion: use his new, so-called L-language to express transitive role propagation and some other stuff in a kind of pseudo first-order logic layer over OWL. Yet another rules language. Yawn.

Luciano Serafini introduced DRAGO (Distributed Reasoning Algorithm for a Galaxy of Ontologies). It is a peer-to-peer reasoning system for distributed ontologies. It uses PELLET as each node in the network of reasoners. He also described some (confusing) concept of "holes" that allow inconsistent ontologies to be ignored by the reasoner. It seems kind of obvious, but maybe there is more to it than that.

Sony QRIO finished off the day: Sony's much-hyped, world-touring robot prototype. Their very life-like 2-foot-tall humanoid robot gave a number of demos. The QRIO could dance in techno and salsa styles. He could also speak (in Japanese), navigate an obstacle course full of stuffed animal toys, crawl under a table, climb a small set of stairs and kick a ball around. He did all this by using his stereo-vision camera eyes (which could change color in a very cool-looking effect) to evaluate his surroundings. He also did some facial recognition. Finally, he had the ability to respond to sounds, detecting and moving towards the clapping of hands behind him.

All this was running inside the cute, tiny robot on 3 separate 400 MHz RISC CPUs running what looked like RedHat Linux. The QRIO could operate fully autonomously, though the Sony engineers could also control him wirelessly from their laptops. Very impressive overall. Scarily human-like walking motion, gestures (and dancing). No doubt we'll be seeing a production model QRIO soon. Young and old kids around the world will so want one.

It took many hundreds of Sony engineers and many decades of worldwide academic research work to produce this robot that can just about mimic a few basic human/animal functions. And yet, scientists say that the super-complex machine of the human body must have come about completely by chance. No intelligent design whatsoever was involved.

IJCAI day 4

2 August 2005 | 0 Comments | Tags: , , ,

Tutorials are over. No more play time. The actual conference started today.

It kicked off with a keynote from Alison Gopnik, author of "The Scientist in the Crib", a book about how young children learn and experiment very effectively. Grown-ups can't generally come up with novel ways of approaching a problem, whereas children will try all kinds of crazy things when figuring something out. Playfulness is important. Interestingly, kids as young as 3 years old can do probabilistic causal inference almost as well as grown-ups. Most people are only good for producing things and management when they get older. The young are the innovators.

An interesting talk on temporal reasoning described adding temporal markup to a corpus of newspaper articles. Their system used a first-order logic reasoner (OTTER) to let users make free-text temporal queries on the data set. E.g. "who were the prime ministers of France from 1962 - 1998?"

When it came time for questions, I asked how much the temporal reasoning slowed down their query processing. Their answer: while a normal search takes 0.1 seconds to answer, turning on temporal inference increases the query time to 4 - 10 minutes (depending on the number of transitive chains that need to be evaluated). Uh-huh. Next. First-order logic reasoning is too slow.

Carsten Lutz gave a survey of description logic work. Many ontology reasoning systems are EXPTime in the worst case, but do quite well in the average case. This makes them quite usable in practice. However, more tools and systems integration is now required.

I found out what "hypergraph decomposition" is. A researcher from Vienna was presenting a poster on the subject. Hypergraphs are graphs where each arc/edge can connect more than two nodes. They are good at capturing several NP-complete problems graphically. Perfectly decomposing a hypergraph is, of course, intractable in the worst case. A graph with as few as 100 nodes can require days of processing. However, a quick-and-dirty algorithm called "bucket elimination" does very well.

This conference is turning out to be quite useful. My body actually functioned reasonably well today, too.

IJCAI day 3

1 August 2005 | 0 Comments | Tags: , , ,

I was really sick this morning. I felt horrible. My body seemed to reject everything I ate the previous day. No wonder really, considering what it was. I've come to the conclusion that people in the UK can't cook vegetables (except potatoes) without turning them into poison. Even the potatoes taste like nothing. I'll eat only fruit and cereals for the remainder of the conference. It's not the healthiest diet, but the alternative is worse.

I'm not even going to talk about the people's consciousness when preparing the food. Simple technical cooking skill alone is bad enough to kill me.

Workshop day today. It was on "Intelligent Techniques for Web Personalization".

The title sounded interesting, but the presenters were not. Some truly awful presentations sent me straight to sleep. Some Indian researchers presented the most boring and uninnovative "research" I've ever seen. Slaves of the west. Some American guy gave another guy's presentation, which he knew nothing about. I couldn't understand a word he was saying. It made absolutely no sense whatsoever.

Some of the things I learnt:
- Never use more than three colors on a website. It looks horrible.
- People, in general, understand the concept of "menus" very well. Search isn't as intuitive for the average person.
- Link click-through is not an accurate measure of the usefulness of a web resource. However, adding a "time spent reading page" metric makes it quite accurate.
- Personalisation techniques will be quite important for mobile devices with limited screen real estate.
- Component critiques and custom deep-links are useful for cutting through a large search space to an area of interest. Fine-grained links are then necessary to zero-in on exactly what the user wants.
- Lots of work on personalizing search, but nothing to write home about. Ontology matters.
- Product recommendation systems are frequently attacked by companies wanting to boost their particular product's ratings. Amazon and CNet suffer heavily from this. Even a simple shilling attack will dramatically distort a product's rating. Something to be aware of.
- Using a domain ontology is useful in product recommendation. The system can improve the recommendation, provide the user with a compelling explanation of why a product was recommended, and even provide a certain degree of protection from shilling attacks. For example: "I see you like films with Tom Cruise; other people of the same gender as you who like Tom Cruise also liked romantic comedies with Mel Gibson. Here is one you haven't yet seen." (See the sketch after this list.)
- Look at RuleML for automating reasoning about recommendations
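A toy sketch of the ontology-backed recommendation idea from the list above. The mini film "ontology" is invented for illustration; a real system would draw its terms from a proper domain ontology:

```python
# Hypothetical annotations: each film is tagged with ontology terms.
FILMS = {
    "Top Gun":         {"actor:TomCruise", "genre:Action"},
    "Jerry Maguire":   {"actor:TomCruise", "genre:RomanticComedy"},
    "What Women Want": {"actor:MelGibson", "genre:RomanticComedy"},
}

def recommend(liked):
    """Pick the unseen film sharing the most ontology terms with the liked ones.
    The shared terms double as the user-facing explanation."""
    profile = set().union(*(FILMS[f] for f in liked))
    score, pick = max((len(FILMS[f] & profile), f) for f in FILMS if f not in liked)
    return pick, FILMS[pick] & profile

film, why = recommend({"Top Gun", "Jerry Maguire"})
print(f"{film}, because you liked: {', '.join(sorted(why))}")
```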

IJCAI day 2

31 July 2005 | 2 Comments | Tags: , , ,

I somehow managed to read an hour of the Nectar of Devotion throughout the day. Wow. Amazing book. It gets better every time I read it. Reading it makes me ecstatic, even if I don't understand what it is talking about. The other conference attendees must have been wondering why I was grinning ear-to-ear while reading some book.

Today's tutorial was on "Principles of AI Problem Solving". Three professors talked the group through various "classic" AI methods for solving problems.

All problems in classic AI can be reduced to the satisfiability problem SAT. Many common problems can be abstracted into moving a robot from a certain initial state to a certain goal state on a grid. Four variations are possible:

- Actions are predictable and we can see exactly what happens.
- Actions are predictable, but none of the moves are observable.
- Actions' effects are random/probabilistic, but we can see what happens.
- Actions' effects are probabilistic and we can only get partial information about the events on the grid. This is the hardest problem.

The various techniques for solving these problems involve searching different types of graph models of the problem space. Graphs can be transformed in certain ways to improve the efficiency of the search. A simple transformation is, for example, to reorder the graph to start at the node with the smallest domain/branching factor.
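That reordering heuristic is easy to see in code: a backtracking search that always branches on the variable with the smallest remaining domain. A generic sketch on a toy map-colouring problem:

```python
def backtrack(domains, consistent, assignment=None):
    """Tiny CSP solver: always branch on the variable with the smallest domain."""
    assignment = assignment or {}
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment
    var = min(unassigned, key=lambda v: len(domains[v]))  # smallest domain first
    for value in domains[var]:
        trial = {**assignment, var: value}
        if consistent(trial):
            result = backtrack(domains, consistent, trial)
            if result is not None:
                return result
    return None

# Example: colour three mutually adjacent regions.
domains = {"A": ["red", "green"], "B": ["red"], "C": ["red", "green", "blue"]}
adjacent = [("A", "B"), ("B", "C"), ("A", "C")]
ok = lambda a: all(a.get(x) != a.get(y) or a.get(x) is None for x, y in adjacent)

print(backtrack(domains, ok))  # {'B': 'red', 'A': 'green', 'C': 'blue'}
```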

One lecturer mentioned a technique called "hypergraph decomposition" that can be used to break a graph into weighted, equal-sized pieces. The AI problem is thereby divided up and (hopefully) becomes solvable in logarithmic time instead of the usual exponential time needed to solve NP-complete problems.

I might be able to use this decomposition technique to break up my ontology by using a reasoning dependency structure. That would help a lot. Very interesting. I'll investigate further.

IJCAI day 1

30 July 2005 | 0 Comments | Tags: , , ,

The conference started today. The first few days are workshops and tutorials. The actual conference comes later.

So, today I attended a tutorial on "Automated Reasoning in First-Order Logic". Professor Andrei Voronkov from the University of Manchester talked us through his Vampire theorem prover (version 8). He's been working on this system for the past 10 years and it is by far the fastest first-order logic reasoning system in the world. It wins just about every category in a yearly theorem-proving competition. The competition is often as much as 100 times slower. Vampire devours other provers.

However, for specialized reasoning in OWL, dedicated tableaux-algorithm-based reasoners are quicker than Vampire. For now. I learnt that there are many parameters by which Vampire can be tweaked. A small change in the parameters will often let the prover answer in a couple of seconds a problem that previously took 24 hours. However, finding the optimal settings is very much a black art. No one understands which combination of parameters will give a good result. Andrei himself is pretty good at it, but he doesn't have the time to investigate all the possible things people want to do with Vampire.

Professor Voronkov is taking a sabbatical at Microsoft Research in Seattle for the next year. Microsoft wants to use Vampire to formally verify device drivers. Bad drivers are a frequent cause of Windows crashing, so Microsoft is very interested in translating the code into logic syntax and letting Vampire find the bugs. Intel has been verifying all their chips in a similar fashion after the embarrassing bug in the original Pentium processor that caused it to give the incorrect answer on a few simple division operations.

Relating to my research, I found out that Vampire, unlike the tableaux-based reasoners, doesn't have a problem classifying large data structures. One major difference between description logic and FOL is that the latter is undecidable. The prover can answer "don't know". A description logic reasoner, in theory, will always be able to answer conclusively. In practice it often answers "stack overflow" when faced with the ontologies I throw at it.

Anyway, Vampire achieves its low memory usage by simply discarding inactive clauses it has generated by its resolution process. The most "young and heavy" clauses are going to be processed last anyway, so why not just throw them out? We're likely to find a solution (= empty clause = contradiction) before then. I wonder if I can do a similar thing in description logic. I'll lose some completeness, of course.
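In other words: keep the passive clauses in a queue ordered by some weight-and-age mix, and under memory pressure simply drop the worst of them. A rough sketch of the policy (nothing like Vampire's actual data structures):

```python
import heapq

def priority(clause, age):
    # "Heavy" = many literals; newer clauses get a larger age counter,
    # so young-and-heavy clauses sort last.
    return len(clause) + 0.5 * age

passive = []
clauses = [("P(x)",), ("~P(x)", "Q(x)"), ("~Q(x)", "R(x)", "S(x)")]
for age, clause in enumerate(clauses):
    heapq.heappush(passive, (priority(clause, age), age, clause))

LIMIT = 2  # pretend memory is full: discard everything past the limit
passive = heapq.nsmallest(LIMIT, passive)
heapq.heapify(passive)
print([c for _, _, c in passive])  # the young-and-heavy clause is gone
```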

IJCAI day 0

29 July 2005 | 1 Comments | Tags: , , ,

IJCAI, the International Joint Conference on Artificial Intelligence, is probably the most important conference in the area of AI in the world (another important one is AAAI). This year's IJCAI is in Edinburgh, Scotland. The conference will be in Hyderabad, India next year.

My supervisor thought it was a good idea for me to attend this year's conference. Especially since it was so close to Manchester. He saves some money and I (hopefully) learn something.

So I traveled up to Scotland, wandered the streets of Edinburgh trying to find the place where I was supposed to stay (I forgot to take money with me for a taxi and couldn't find a cash machine), eventually found the Pollock Halls and collapsed in my room.

Edinburgh is a very old city with lots of history. Lots of ancient rock walls, rustic buildings and stone bridges. However, this backdrop does little to hide the usual vices of Kali-yuga. Scots seem a bit more brash than the usual Englishman. The homeless are more obvious, the drunks more visible, the prostitutes abound everywhere. So, altogether, a typical western city.

Tags for the masses, ontologies for developers

27 July 2005 | 40 Comments | Tags: ,

In my line of research I’m very much involved with ontology development. I’m not going to beat around the bush: developing ontologies is hard. Really hard. The more logically rigorous they get, the more difficult they become to construct.

So, you might ask, how is the vision of the great and wonderful “semantic web” ever going to work? After all, ontologies are the framework that is meant to undergird the Internet of tomorrow.

Take a look at del.icio.us, flickr.com and technorati. They all use an up-and-coming (craze of the moment) idea of tagging. You allow people to add any word to their content and collect all these tags up into a large list. The larger the font, the more frequently used the tag. The obvious problems are synonyms and homonyms. However: who cares?! It kind of works, anyone can understand the idea, so wa-hey: let’s go tag crazy.
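Part of the appeal is that the whole mechanism fits in a few lines. A minimal tag-cloud sketch:

```python
from collections import Counter

posts = [
    {"title": "Trip photos", "tags": ["travel", "photos"]},
    {"title": "New lens",    "tags": ["photos", "gear"]},
    {"title": "Edinburgh",   "tags": ["travel"]},
]

counts = Counter(tag for post in posts for tag in post["tags"])

# Scale font size with frequency: the entire "algorithm" of a tag cloud.
for tag, n in counts.most_common():
    print(f'<span style="font-size: {10 + 4 * n}px">{tag}</span>')
```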

Ontologies however are much more powerful and dangerous. They exactly and unambiguously define terms and formally capture relationships between terms. You get transitivity, inheritance and other great stuff like that. Moreover, computers can automatically navigate these data structures and use them to answer almost any question you can throw at them. Feel the power!

What to do? The general populace is never going to be able to author ontologies, but could possibly be induced to use them, given a simple enough interface. So, if the subject area we are describing is sufficiently limited that we can construct an ontology to cover it (no one is going to be able to create an ontology of “everything”), then we can allow people to tag their content with our ontology’s terms. The result: we can have our computers sort, manage, slice and dice their tagged content any which way, take advantage of all the advanced features and the world is a better place. Amen.
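A sketch of the payoff. Tag content with terms from a small fixed ontology, and the subclass hierarchy (here just a parent dict walked transitively) answers queries that raw free-text tags never could. The terms and posts are invented:

```python
# Hypothetical mini-ontology: term -> parent term.
PARENT = {"Bhakti-yoga": "Yoga", "Hatha-yoga": "Yoga", "Yoga": "Practice"}

def ancestors(term):
    """Transitive closure up the subclass hierarchy."""
    while term in PARENT:
        term = PARENT[term]
        yield term

tagged = {"post-1": ["Bhakti-yoga"], "post-2": ["Hatha-yoga"], "post-3": ["Cooking"]}

def find(query):
    # A post matches if any of its tags is, or specializes, the query term.
    return [p for p, tags in tagged.items()
            if any(t == query or query in ancestors(t) for t in tags)]

print(find("Yoga"))  # ['post-1', 'post-2'] - plain tag matching would find nothing
```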


Every leader should have a blog

23 July 2005 | 0 Comments | Tags: ,

I was listening to an excellent talk with Jonathan Schwartz, president and COO of Sun Microsystems. One of the many interesting things Jonathan said was that a blog is a great tool for leaders. Every leader should have one. He uses his blog to communicate his ideas to his employees. They can also directly interact with him by posting comments and talkbacks. It effectively cuts through the corporate hierarchy and allows him, as a leader, to directly lead a large number of people. The result: massively decentralized decision making and management!

The alternative is going through the usual management structure, down the multi-level corporate hierarchy. A process that is both slow and prone to Chinese whispers.

As to the danger of putting his corporate strategy up on the net for everyone, including competing companies, to read: "The competition's employees also read it and if they like what I'm saying better than what their boss is saying, they'll join Sun".

Vlogs

18 July 2005 | 0 Comments | Tags:

Wired magazine has an article about vlogging, or video blogging, or video world wide web logging (to expand the shorthand completely). Short 3-5 minute videos of Vedic philosophy delivered in a fun way by devotees with interesting personalities have so much potential to become really popular.

The most popular of the vlogs is Rocketboom. We can do at least as well as they do, don't you think? Let alone the other (terrible) vlogs out there.

Blogs: use both RSS and email

11 July 2005 | 0 Comments | Tags:

Here is an interesting take on the RSS phenomenon. The author's basic point: RSS is too complicated for most people, so provide them with an email delivery mechanism in addition to RSS.

A "top-10 blog postings of the week" type email newsletter could, for example, reach a much wider audience than a difficult to subscribe to RSS feed.

Podcasting (Adam Curry)

9 July 2005 | 3 Comments | Tags: ,

I was listening to Adam Curry's podcast "the daily source code" today. Adam Curry is a former MTV host who has a daily podcast about other podcasts (and other useless stuff that is on his mind) and is considered to be one of the pioneers of podcasting.

He has an interesting take on the difference between blogs and podcasts:

Blogs come from an inherent desire of people to publish. Podcasts are the opposite: people create them because they are dissatisfied with what is available in the media (Radio/TV). In that way they are similar to the iPod (which is presumably why Apple has built podcast support into iTunes 4.9, the companion software to the iPod). The listening experience is no longer tied to what someone else thinks we will like. Instead everyone can listen to what they want to listen to when they want to listen to it. Freedom! Liberation! True happiness! (snicker)

"I want to tell people about something I'm interested in" vs. "I don't like what's on the radio, so I'll create my own"

There was no real market for the former (blogs) when they first manifested, but the latter (podcasting) is likely to have a much faster uptake, since people are actively looking for something like it. Mundane sound is just so dull and lifeless. All it needs is to become easy enough for the average Joe to "tune in" and we'll have a listening revolution on our hands.

For those interested in creating their own podcasts: Adam and his friends are developing some podcast creation software.

What is the semantic web?

4 July 2005 | 1 Comments | Tags:

I found this very good article about the semantic web. In short, the semantic web is:

"A giant, distributed, machine-readable database that allows computerized intelligent agents to process knowledge and invoke web services, while also enabling better annotations, browsing and search for humans."

It's the future of the Internet (if it ever works).

Subscriptions vs. one-off payment

3 July 2005 | 27 Comments | Tags: , ,

Assertion: subscriptions are better, but people generally don't like them.

Consider the iTunes music store: unlike Yahoo Music or Napster, it offers a very simple fixed price of $1 per song. Whatever you buy is yours to keep. All other online music stores offer a subscription-based plan where you pay $5 - $10 per month and download as much as you like, or some hybrid scheme.

One might think that the subscription model would be more popular. After all, it's better value. Just think: unlimited songs!

Wrong! Even people who buy more than 10 songs per month prefer the simple iTunes buy-once model. People like to feel in control. People also like to avoid any kind of commitment. Finally, people don't want things to be too complicated. Keep it simple!

Subscription services do sell in some scenarios. Take online role-playing games (please!): something like Sony's Everquest charges $40 for the game and then another $15 per month on top of that, yet is hugely popular. Some players spend hundreds of hours online, fighting monsters, completing quests, building up their virtual character, earning fake money (even more fake than the so-called real money) and so on. Everquest is designed to hook people into "just one more quest" and keep them playing and paying for-"ever". It works, too: Sony was astonished at how much money they made off Everquest.

Some smart new online games (most notably Guild Wars) have figured out that they can reach a much larger market (and more money) by not charging a subscription fee. Instead they'll release an "expansion pack" every few months and steal people's money that way.

Yoga teachers are notoriously bad at business. Most yoga classes I see advertised around the University want people to sign up for a 10-session course, or something of the sort. Most people I've talked to don't want to make that commitment and therefore end up not doing any yoga at all. Major untapped potential!

These yoga courses mean well, of course. People won't get any real benefit from just a single yoga session here and there. If someone really wants to improve their bodily and mental condition, it is best if they do two 90-minute sessions per week. But, lo and behold, no one wants to do that. People don't care about themselves.

The same holds true for chanting the maha-mantra. Great if someone utters the mantra once. However, Krishna is most pleased when we make a commitment to chant a fixed amount each day. When Krishna is pleased we automatically also get satisfied. It's like watering the root of a tree. Again, the subscription model benefits all parties. And yet again, few people want to make the commitment.

Solution: I plan to offer a 40-minute yoga class once a week for a one-off one pound fee/donation and then follow that with some (free) chanting and philosophy, for those that are so inclined.

Scheme: attract people with something that they think they want, though it won't really benefit them and then make it as easy as possible for them to take to something that actually will give unlimited benefit.

Creating culture

30 June 2005 | 2 Comments | Tags: ,

Think back just 10 years. It used to be the case that only special people created culture. No normal person could hope to be called "author", "filmmaker", or "musician". This is rapidly changing. I was listening to a podcast discussion on digital lifestyle and one on maximizing your blogging strategies. One of the speakers mentioned that anyone can now consume as well as create media. Anyone can publish a blog, anyone can podcast, anyone can create a film (okay, a fair amount of disposable income is needed, but, believe me, digital filmmaking is much cheaper than you might think). The shape of our culture is in the hands of everyone.

Great danger: giving power to the masses will lead to chaos! The masses are dumb! They don't know what's good for them! This will just lead to more porn on the Internet.

Great opportunity: show the masses how to be Krishna conscious and a cultural revolution can spring up from the bottom. Distribute enough books (and get people to actually read them and then post their realizations on their blogs) and BOOM! It's the talk of the city/nation/world.

Great difficulty: information overload! With everyone creating content how do you choose which content to consume? What will make the Krishna conscious content stand out from the throngs of useless posts? How to get someone to pay attention for more than 5 seconds?! How to get someone to commit to a process of self-realization for more than 1 day?

On powerpoint presentations

25 June 2005 | 3 Comments | Tags: , ,

Powerpoint is ubiquitous in the commercial world. Seemingly every talk, presentation, meeting, lecture and discussion must be accompanied by a presentation. There are some good presentations and very many bad ones. Michael Hyatt, the CEO of Thomas Nelson Publishers, the largest Christian publishing company in the world, has given a few useful guidelines on effective presentations in his blog.

In the devotee world presentations are very rare. The Vedic process of dissemination of knowledge is by sabda, or sound. Up until 5000 years ago no one even saw the need to write anything down. Western education, on the other hand, is almost completely visually focused. Many people, myself included, find it somewhat difficult to switch between the two paradigms.

My spiritual master tried giving a presentation in Wellington using a projector and a few simple presentation slides. He was blown away by the result. The attentiveness, retention, quality of questions asked afterwards, all were phenomenal. The audience even applauded afterwards. All because of having a projector screen to look at.

I suggest combining the transcendental sound vibrations of Vedic knowledge and expertly constructed visual imagery of the western world. The result: ultimate learning experience.

(Side note from Michael Hyatt that I also agree with: dump PowerPoint and use Apple Keynote instead. It produces much better looking presentations. Unlike Microsoft, Apple software has a sense of style.)

Comment spam combat

18 June 2005 | 14 Comments | Tags:

Just got hit with 7 comment spam postings for an online casino. So now: no more comment spam allowed on this website. After some googling I found what looks like the best spam protection plugin for WordPress. Hashcash makes the commenter's web browser generate a hash code before allowing the comment to be submitted. Simple, effective and totally transparent to the user. I like it.
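The idea, in a nutshell: make the client burn a little CPU finding a token whose hash has a required shape. That is imperceptible for one genuine comment but ruinous at spam volume. A minimal proof-of-work sketch in Python (the actual plugin does the equivalent in JavaScript in the browser):

```python
import hashlib
from itertools import count

def stamp(comment, difficulty=4):
    """Find a nonce so sha1(comment + nonce) starts with `difficulty` zero hex digits."""
    for nonce in count():
        digest = hashlib.sha1(f"{comment}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify(comment, nonce, difficulty=4):
    digest = hashlib.sha1(f"{comment}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = stamp("Nice post!")          # the commenter's browser does this work
print(verify("Nice post!", nonce))   # the server checks it instantly: True
```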

Why computers are hard to use (part two)

22 May 2005 | 1 Comments | Tags:

Taking off my researcher hat for a moment: the real reason computers are difficult to use is simply that they are very, very, very complicated. No one expects an F-22 fighter jet to be easy to use, so what makes a home computer any different (except for the price, of course)? (Note: the F-22's computer system performance is comparable to a high-end PC's, but the plane costs one hundred thousand times more.)

The chips at the heart of modern PCs contain around 50 million transistors and an advanced operating system, like Windows XP, that runs on these chips took 100 million man-hours to create. The computers we use today are the most complicated machines humans have ever created (at least in the last 5000 years of history that we currently have access to). If something is so complicated we should not be astonished it is difficult to use. Quite the contrary, it is amazing that we can do anything at all with them without requiring years of study and training.

Why computers are difficult to use

19 May 2005 | 4 Comments | Tags:

I was having a discussion with fellow researchers in an academic writing module. We were discussing the difficulty of evaluating our research against some objective criteria. Three of the people's PhD projects are about improving the ease of performing a certain task (e.g. building an ontology). However, to measure "ease" a series of usability tests is required. HCI, however, is something this computer science department does not teach (at all). It is not "hard-core engineering" enough.

Ultimately, these students may end up changing what they do so that they come up with a research hypothesis that is easier to prove. I think this is a major flaw in the way research is conducted. Everything is far too focused on evaluation, evaluation, evaluation. Usability is difficult to objectively evaluate, so most research ends up avoiding usability altogether. The result: completely unusable software that bewilders the average human being.

If only we could relax the so-called objectivity of modern science and introduce some subjectivity. Scientists would be more inclined to the process of improving their subjective state of consciousness and computers might actually become easy to use.