Viewing entries tagged with 'internet'
I have created a proposal for a new Question & Answer website for Hare Krishna devotees and I need your help.
The proposed website will be built on the same software as stackoverflow.com, a hugely popular site where over seven million computer programmers help each other with difficult programming problems. On Stack Overflow the audience votes for the best answer. Answers with the highest number of votes automatically rise to the top, to be read first. People answering questions gain reputation from each "up" vote for their answer, encouraging them to answer questions well. I can see this proposed site turning into an equally amazing resource.
We need a certain number of people "following" the proposal before the people who run StackExchange see it as important enough to make it into a real website. So, please help out and click the "Follow It" button on this website and enter your email address:
Please forward this to all your devotee friends and get them involved, so we can get our Q&A site launched very soon.
Detailed information about what I'm trying to do here:
The primary aim of the site is to give devotees a way to get good answers to all kinds of questions, as well as for more learned devotees to share their knowledge. If an answer is online, then Google can find it and it becomes a permanent record for the future. That is much better than a devotee answering the same question over and over again on various forums. It is also better for the person looking for the answer, because the voting highlights what the best answer is, placing it at the top.
I got the idea for this kind of website after a conversation with another devotee. We were talking about facebook and blog controversies and the inability of devotees to do much about them. I thought about this a lot and had the idea for a devotee Q&A site that could address both local and global controversies in an authoritative way that doesn't look like it is just one person's biased opinion. A good answer on such a site will visibly have the stamp of approval from a whole group of devotees. Then, a few weeks ago, this StackExchange thing came along and did all the hard work of designing such a site for us. So, I'm very keen to take advantage of this opportunity.
To my knowledge every other devotee attempt at an online community either gets neglected over time, stays limited to a few hundred people, or has a confusing and bewildering interface. This is not any individual's fault. The fact is that building a website that people will use for valuable, high-quality social interactions is very difficult. The best essay on the topic is Clay Shirky's "A Group Is Its Own Worst Enemy".
The author explains in great detail why so many social websites fail. The StackExchange model is exactly in line with the principles recommended to make a site successful. It is expertly designed with identity (once the site is launched, anyone that contributes needs to have an identity and is therefore accountable for what they write - no anonymous answering), voting (the community polices itself), reputation (a way to identify those members of the community that are in good standing) and a barrier to entry (you need a certain good reputation to be able to vote to determine what is a good answer). All this means that a website based on the StackExchange technology can still be useful and manageable with millions of users.
Please help make this amazing resource a reality by clicking "follow" on the proposal and writing some good and bad example questions to go on the site. Bad questions are those off-topic questions that we don't want appearing on the site to keep the site focused on topics related to Krishna consciousness. We need examples of such bad questions in the proposal stage to define what the site will be about.
(sign up to the site using your Gmail or Yahoo email address - that is what is meant by OpenID).
I've installed new picture gallery software on this website. The old Coppermine gallery was nice, but kind of clunky and didn't integrate well with my desktop applications. So, out it went. In its place I put a gallery called (simply) Gallery.
Take a look at the new gallery. You will find an archive of all the photos from the old gallery and a brand new picture album. The pictures in the new album were taken with a new Pentax K20D DSLR camera. I think you will notice these pictures are of significantly higher quality than all the previous images (shot with an old Minolta A1). If, for some reason, you want to look at the old Coppermine gallery, it is still online here.
I attended a presentation by Michael Uschold of Boeing's Phantom Works division. He talked about ontologies and semantic applications and the pressing need for them in today's software industry. I thought it was a great presentation. The following is a summary of his ideas, as I gathered them while listening:
Dr. Uschold explained that when one is talking to someone about semantics, one needs to sell its value. One should provide answers to the following questions: how will semantics help? Why is it better? What are the costs and benefits? Where will it fit in the architecture?
For example: there was a task at Boeing that required someone to write a report every three months. Writing the report involved the guy formulating a bunch of database queries, loading the results into Excel, messing around with the data a bit to shape it into the required form, and then writing the report. Altogether this was a 20-hour task. Doing the same task with an ontology would be much quicker and produce a more accurate and more complete result. This is because an ontology uses the same schema (or language) for everything in the workflow. There is no need to convert between different data representations.
So, the value of ontologies for IT systems is that they allow systems to be more tightly coupled. In a traditional system the semantics are implicit. That is, they are hard-wired into the system. You can't see them, you can't change them and you can't maintain them. So, more often than not, the system is out of sync with its requirements. For example: suppose someone creates a model (in UML) and writes the code according to that model (in Java). Then the requirements change and the code is updated to match, but no one ever updates the model. Over time the model and the code grow further and further apart until the model is all but useless. With an ontology, the model is directly used to drive the system. Any change to the requirements requires a change to the ontology model and that, in turn, results in a change to the system. The result: everything is up-to-date all the time. This is the holy grail of semantic systems: a model driven architecture (remember that buzzword!).
The benefit of semantics is that they allow common access to information. Ontologies have unambiguous formal semantics. So, for example: in a semantic data warehouse, the ontology can provide a common schema for querying multiple databases; when doing system integration, the ontology allows for enterprise wide interoperability; and when capturing organizational knowledge, the ontology allows this knowledge to be stored, queried and accessed throughout the organization.
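To illustrate the common-schema idea (this is my own toy sketch, not anything from the talk): if every fact from every source is stored as a subject-predicate-object triple, then one pattern-matching query mechanism covers all of them, which is essentially what a SPARQL engine does over an RDF store. All names below are made up:

```python
# toy triple store: every fact, from any data source, is a (subject, predicate, object) triple
triples = {
    ("737", "type", "Aircraft"),
    ("747", "type", "Aircraft"),
    ("737", "hasEngine", "CFM56"),
    ("747", "hasEngine", "PW4000"),
    ("CFM56", "madeBy", "CFM International"),
}

def match(pattern):
    """Return all triples matching a pattern, where None acts as a wildcard."""
    return [
        t for t in triples
        if all(p is None or p == v for p, v in zip(pattern, t))
    ]

# one query mechanism over everything: "which engines does the 737 have?"
engines = [o for (s, p, o) in match(("737", "hasEngine", None))]
```

The point is that new kinds of facts can be added without changing any schema, and every query, from any application, goes through the same pattern-matching interface.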
Speaking of querying: semantics enable better search. Semantic search goes a step beyond basic keyword-based search. It allows for detailed and very specific question answering and document retrieval.
Semantics offer many benefits in knowledge management. They allow organizations to retain knowledge (e.g. when people retire), share knowledge and enable communities of practice (by, e.g., informing people throughout the organization about who knows what). Semantics enable secure knowledge authoring and storage, since a rich ontology- or rule-based specification can accurately and reliably control everything that anyone is allowed to see and/or change. Semantic knowledge management would be especially useful for compliance with the Sarbanes-Oxley Act (which all large organizations are severely struggling to comply with, because it is so ridiculously complicated).
Semantic technology allows for lean and agile application development. With a database you are stuck with a given schema that was designed for a specific problem scenario. Want to ask a different question? Then you had better get ready to spend at least two days rewriting all your SQL, or watch your performance go down the drain like nobody's business. An ontology allows for improved reliability, consistency and reusability. People still don't know how to reuse code. An ontology, however, is built for re-use.
So, in short, the benefits of semantic technology are: flexibility, flexibility, flexibility!
Ontologies do have some limitations, however. They can't do everything.
For one, scaling is a big issue. Reasoners currently have difficulty providing efficient A-box reasoning (answering questions about a large number of individuals/instances), as well as dealing with very large ontologies. There is also not much in the way of commercial application support for ontologies. The triple stores on the market are, for the most part, really, really dumb. They just store triples. If you want any reasoning support at all, you need to do it yourself.
Then there is workflow control. There needs to be more support for collaborative ontology development and change management. Large groups need to be able to concurrently build ontologies.
Another major issue limiting the adoption of semantic technology is that it is pretty much impossible for a normal person to understand. Take OWL restrictions, for example (please!). To describe a "big red ball", one needs to write: the class Ball, with an anonymous superclass that is a someValuesFrom restriction over the property "hasSize" with the filler class "Big", and another someValuesFrom restriction over the property "hasColor" with the filler class "Red". How bizarre is that?! The non-logician/non-geek just wanted to describe a ball, not get into the details of hopelessly complicated formal logic (and that was an easy example!). The complicated stuff really needs to happen behind the scenes.
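For the curious, that same class reads somewhat more humanely in OWL's Manchester syntax (the class and property names here are, of course, just for illustration):

```
Class: BigRedBall
    SubClassOf:
        Ball
        and (hasSize some Big)
        and (hasColor some Red)
```

Even in this friendlier notation, a non-logician is unlikely to guess what "some" means, which rather proves the point.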
Finally: we still need code. Ontology models can't yet drive the whole system. They are just a small part of a very big picture.
Questions that need answering
There are a few common questions that people in industry need answered before they will adopt semantic technologies. These include: how do I use my ontology in my architecture? How do I integrate this into my Eclipse framework? How does it link into my middleware? Which API(s) should I use? Will I have to roll my own all the time, or can I use some kind of IDE for ontologies?
So, what we really need is a book that covers semantic middleware and semantic programming (i.e. telling the reader: "this is Jena and this is what it does", "this is Jess and this is what it does", etc.). That, coupled with an ontology programming interface that abstracts some of the APIs and programming tasks needed for ontology development, would go a long way towards enabling the adoption of semantic technologies in real-world applications.
An upcoming film about a Buddhist cook. This raises the question: why didn't they make a film like this with the Hare Krishnas instead? Which is the "kitchen religion": Buddhism or Vaishnavism (Krishna consciousness)?
There is obviously a market for and interest in this sort of movie. It seems like a great way to present our philosophy. Kurma prabhu are you listening?
This video site has a number of how-to and self-help videos. For example, the hilarious "how to give a great man-to-man hug" video. The videos are informative and often really funny. I could see this site becoming quite popular in the future. They have a niche beyond the usual YouTube clone.
So, how about videos on: "how to offer obeisances", "how to ask a question to a senior devotee", "how to enter the temple room", "how to wear a dhoti", "how to eat prasadam", etc.
Such videos would be more accessible than a book and potentially even quicker to produce. All it would take is a video camera, some aspiring devotee actors and a computer with good video editing software (such as iMovie on the Mac).
(see also my previous post on video blogging)
The market for amateur video content on the Internet is growing rapidly. A creative and unique video weblog (also known as a vlog) can attract millions of young viewers. Carana Renu also recently blogged about this opportunity.
Vlogging is the practice of posting low-budget home-made video on a blog website. User created content on the net is exploding. It even has big-media companies worried. People might stop watching their stuff (heaven forbid). People are already watching less TV and spending more time on the Internet.
Established companies like Google have also entered the market. Google recently bought an up-and-coming video sharing website (YouTube) for $1.65 billion. Analysts estimate that the total market worth of online video in the United States alone will be $7 billion in 2010.
Technical background information
Media players like the Apple iPod, most smartphones and the newly introduced iPhone all play video content. Practically all computers can, of course, also play video. These devices can be set to automatically download video from the Internet for later playback on the go (using a process called podcasting, based on RSS feeds). Many people watch vlog content they have subscribed to while, for example, on the train to work. They get new episodes automatically delivered to their device as they are published. They watch the shows they want to watch, when they want to watch them. It is time-shifted TV (with more interesting content).
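The plumbing behind podcasting is simple: the vlog publishes an RSS feed, and the player periodically parses it, looking for new episode "enclosures" to download. A minimal Python sketch, using a made-up feed:

```python
import xml.etree.ElementTree as ET

# a minimal, made-up RSS feed of the kind a podcast client would poll
feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Vlog</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="http://example.com/ep1.mp4" type="video/mp4" length="1000"/>
    </item>
    <item>
      <title>Episode 2</title>
      <enclosure url="http://example.com/ep2.mp4" type="video/mp4" length="1200"/>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
# collect (title, download URL) for every episode in the feed
episodes = [
    (item.findtext("title"), item.find("enclosure").get("url"))
    for item in root.iter("item")
]
```

A real client would remember which URLs it has already fetched and download only the new ones.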
Examples of vlogs
All successful vlogs have a few things in common: they have their own unique style and personality and don't try to copy something that already exists. They are fresh, funny, creative and hosted by energetic presenters who genuinely care about what they are doing. A good length for a video is 3 to 5 minutes.
Potential benefit of a Krishna conscious vlog
The Internet is the Maha-Brihad Mrdanga of our time. A normal mrdanga can be heard only within a small radius, while Bhaktisiddhanta Sarasvati's brihad mrdanga was book distribution. But books have to be printed and physically distributed. The Internet is not bound by physical limitations. Content can be distributed to an almost unlimited number of people, as long as they are willing to watch. More and more people are coming online every day, all seeking interesting and entertaining content. For example: 300,000 people download each episode of Rocketboom and 100 million videos are downloaded from YouTube each day! YouTube recently had an award competition for the best online videos of 2006.
Most Krishna conscious video content that I've seen freely available on the Internet is either low quality or very old. Some DVD material is much better, but suffers from very limited distribution and is too long for the short attention spans of today's youth.
A Krishna conscious vlog done correctly could have major local and global impact!
A show that covers issues local to the city will naturally attract viewers in that city and connect with the local population. Moreover, a member of the media (i.e. a person with a camera, distribution mechanism and audience) has a license to interview almost anyone. The vlog presenters could have access to the local celebrities, asking them relevant spiritual questions, selling them books and maybe even introducing them to Krishna consciousness. Another George Harrison would work wonders for KC. Of course, such a vlog would also help attract people to the local center or temple.
Vlog content could be re-used globally as part of presentations anywhere in the world.
- Authoritatively smashing the bogus so-called spiritual activities
- Interviews with celebrities
- Ask a Hare Krishna
- Interviews with normal people on the street
- Highlighting the evils and dangers of birth, death, disease and old age
- Answering viewer submitted questions
- Teaching of yoga postures with KC commentary
- Enticing people with images of prasadam
- Video renditions of plays
- Covering of local news and current events from a spiritual angle
- Covering the festivals in a "news" style
- Teaching Krishna conscious philosophy and science (but keep it short, light and funny)
- The production quality could be too low, or the content might not be interesting enough for normal people, and no one would end up watching the vlog.
- Someone creating a devotee vlog might not have had enough exposure to the vlog medium to be able to style it in a way that is successful and attractive to an audience.
- The hosts might not have the necessary enthusiasm (which will come through in the video) and it will not be very attractive.
- Devotees might start, but, after a few episodes, lose interest/motivation and fail to produce new shows. The project would eventually fizzle out and disappear like many, many one-off hit videos that are popular one day and forgotten the next. The only successful vlogs are the ones that remain popular episode after episode.
Video blogging is a new, up-and-coming style of media. A Krishna conscious vlog is a great, fun opportunity to communicate Krishna consciousness to millions of people. Someone should do it. In fact, everyone should do it.
I recently attended a round-table discussion with Grady Booch. Yes, the Grady Booch. What, you've never heard of him? If you studied Computer Science you are sure to have at least one book of his. He is one of the gurus of software development. He is now working as "chief scientist" for IBM.
You can also watch his recent Turing Lecture on "the promise, the limits and the beauty of software". It is very interesting.
Here are some tidbits from the discussion with him:
Functional programming languages (like LISP, Scheme and SML) failed largely because they made it very easy to do very difficult things, but it was too hard to do the easy things.
The current buzzword for revolutionizing the software industry is SOA: Service Oriented Architecture. Grady calls it "Snake Oil Oriented Architecture". It is just re-branded "Message Oriented Architecture". The idea is to expose services and describe them using WSDL. This decreases coupling between systems. The service becomes the thing to test against. The rest of the software application becomes a black box. A meta-architecture emerges: no software is an island unto itself.
It is a good idea, but the hundreds of WS* standards are so complicated and ill-defined that Microsoft's and IBM's implementations end up being incompatible. Lesser companies have no hope of ever implementing these crazy so-called standards. Just another scheme by the big companies to lock people into their software.
Bill Higgins' REST-style of SOA is much more promising. It builds upon the idea of something like HTTP instead of the complex transfer protocols of the WS-Vertigo world.
But back to software architecture...
The next big challenge in software architecture is concurrency. Raw clock speed has just about reached its physical limit. Chip companies are now putting multiple copies of the same CPU onto a single chip. The result is that applications can no longer just be run faster. They have to be run in parallel in some way. For example:
Dreamworks computer animation uses 10,000 servers in a production pipeline to render movies like Shrek 3. They will soon switch to using multi-core processors, but will have trouble distributing the workload to take advantage of all those cores.
The game company EA has the same problem. The PlayStation 3 uses the Cell processor, which has an 8-core CPU. How does one take advantage of all these 8 cores? EA segments their games into simple concerns: graphics on one core, audio on another, AI on yet another, etc. But the company admits that they are using only about 10% of the processor's capacity. So much potential computing power is wasted because it is really difficult to parallelize something as complex as a video game.
A typical Google node (and there are many around the world) consists of about 100,000 servers, but Google has a relatively "easy" problem. Search is "easy" to parallelize.
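The contrast in these examples is between workloads that farm out naturally and those that don't. A sketch of the "easy" case in Python: when every unit of work (here, a pretend movie frame) is independent, it can simply be mapped across a pool of workers. Real render farms distribute across processes and machines; a thread pool just illustrates the shape:

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(n):
    # stand-in for an expensive, fully independent rendering job
    return f"frame-{n}.png"

# independent frames farm out trivially across workers, in order
with ThreadPoolExecutor(max_workers=4) as pool:
    frames = list(pool.map(render_frame, range(8)))
```

A video game has no such luck: graphics, audio, AI and physics all share mutable state every frame, which is exactly why EA's simple one-concern-per-core split leaves most of the chip idle.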
The perfect architecture doesn't exist. Good architectures have evolved over time. The first version of Photoshop wasn't very good, but it has undergone many rebirths. Amazon's computer systems can handle the loss of an entire data-center without a shopper ever noticing. It certainly wasn't always that way, but by gradual refinement they have built (and are continuing to build) a better and better architecture.
A typical EA game costs about $15 million just in development cost (that is without the cost involved in licensing, marketing, or distributing). Two kids in a garage can no longer create amazing software. They can have a great idea, but it has to evolve into something much more complex to be truly useful (on that note: Google is a company most seriously in need of adult supervision; way too much money in the hands of kids. They will soon face a mid-life crisis just like IBM has in the past and Microsoft currently is right in the middle of - just look at the state of Windows Vista).
Some principles for a good architecture:
- Crisp and resilient abstractions: use an object oriented view of the world, rather than algorithm based view of the world. Think about things instead of processes (this idea dates back to Plato).
- Good separation of concerns: that is in one sense obvious, but is also really hard to get right. It is very tempting to put bits of logic in the wrong places in the architecture.
- Balanced distribution of responsibilities: no part of the system should dominate the entire architecture.
- Simple systems: the holy grail; very few software companies get to this point. The best systems are ones that actually decrease their amount of code over time. Good developers find ways to do the same functions more efficiently.
How do you tell a good architecture when you see one? Ask the following questions:
- Do you have a software architect? (or, at most, 2 - 3 people sharing the role)
- Do you have an incremental development process? (not waterfall, but releasing a new version every week or so)
- Do you have a culture of patterns? (design patterns are beautiful and the best thing for creating good software)
If the answer to all three questions is "yes", then chances are you have a good architecture, or even if you do not have a good architecture at the moment, you will gradually evolve to having one.
Want to learn about good architecture? A good place to start is the 4+1 view model of software architecture. Software needs to be envisioned from multiple different perspectives simultaneously. Just like there can't be just one 2D diagram outlining the plan for a house, there can't be a single view of a software application. [I might add that there can't be just a single view of the Universe. The Vedic literature therefore describes the Universe from 4 different viewpoints simultaneously.]
As for Web 2.0: it is a meme, an idea, a flagpole you can hang almost anything on.
As for the Semantic Web? Developers don't understand normal software architecture properly, so what chance is there for them to understand something as complicated as semantically aware software? So, in Grady's opinion, the semantic web is a long, long way off.
"The livestock sector emerges as one of the top two or three most significant contributors to the most serious environmental problems, at every scale from local to global." (United Nations LEAD report)
Meat eating is destroying the planet!
In an article over on huffingtonpost, blogger Kathy Freston describes how a vegetarian diet can do more to reduce pollution than, for example, switching to a fuel-saving hybrid car like the Toyota Prius.
Reducing the environmental (and karmic) impact of the 10 billion animals that are killed each year in the United States (that's over 300 deaths every second) is far more important than reducing the pollution caused by the 250 million passenger cars in the USA. Of course, there is nothing wrong with more fuel-efficient cars, but vegetarianism should be given much greater priority as a quick, easy, cheap and effective first step to save the planet.
Q: What is the best way to adopt and maintain a vegetarian diet?
A: Krishna consciousness automatically transforms one's consciousness so that one loses the desire for flesh eating. Repressing the desire for meat is very difficult, but by experiencing a higher taste one becomes fixed in consciousness (see BG 2.59).
Google PageRank measures how important a website is by how many other websites link to it. The more people link to a website, the more important it is. If no one is linking to a given website it will have a PageRank of 0; if practically everyone on the entire World Wide Web is linking to a website it will have a PageRank of 10.
The more important a website, the more likely it is to appear higher up in the list of search results in Google. The higher a site is in the list of search results, the more people find and visit it. So, PageRank gives a good idea of how much impact a website is having.
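The underlying algorithm was published by Google's founders, so the idea can be sketched exactly: each page repeatedly shares its importance with the pages it links to. Here is a toy Python version over a made-up four-page web (the toolbar's 0-10 score is roughly a logarithmic rescaling of raw values like these):

```python
# toy link graph: page -> pages it links to (page names are made up)
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],   # nobody links to "d", so its rank stays at the floor
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # every page keeps a small base rank, plus shares passed along links
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
```

After the iteration settles, "c" (which everyone links to) ends up with the highest rank and "d" (which no one links to) the lowest, which is exactly the behavior described above.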
(You can check the PageRank of any website using this tool.)
I've compiled a list of the various Krishna conscious blogs over the Internet with a Google PageRank score of 5. Five is the highest PageRank of any KC blog I've found. A few non-blog websites like Krishna.com and Iskcon.com have a PageRank of 6, but that's about it.
So then, behold: here is the list of those Krishna conscious blogs with the most world-wide impact (according to Google):
- A.C. Bhaktivedanta Swami Prabhupada's letters posted each day in a blog format
- Blog of the Atma Yoga center in Brisbane, Australia
- Balarama Chandra Dasa's blog about the Krishna Camp at the Rainbow Gathering
- Candidasa dasa's blog, the one you are reading right here
- Devamrita Swami's blog about his travels
- Blog of the Gaura Yoga center in Wellington, New Zealand (although there is only one entry so far)
- Kurma dasa's blog about cooking and his travels
- Lilamayi Subhadra devi dasi's blog about her activities in South Africa
- Blog of the Loft in Auckland, New Zealand
- Satoxi's blog about life as a Hare Krishna girl
- Visnumaya devi dasi's blog about her activities in New Zealand
(if you have a PageRank of 5, but are not on the list, please comment or email me and I'll add you).
The Internet is making it ever easier for "normal people" to produce "professional" content.
Blogging turns anyone into an online journalist. Podcasting allows people to create their own on-demand radio shows. Using Apple's iMovie the average guy or gal can even produce professional quality movies (though don't try that on a PC as this Apple Mac advert cleverly illustrates).
However, one medium still eludes the non-professional: books! It is surprisingly difficult to produce a professional looking book. Sure, anyone can print a crummy-looking plastic-comb-bound collection of words on cheap paper, but that is a lot different from a nice solid hardcover book. Those require some expertise to produce.
It is not just the print quality. I've seen some people publish books written using Microsoft Word. The result is not very nice. The poor quality of the page layout is instantly recognizable. It is with good reason that the archaic LaTeX document preparation system is still almost universally used in academia to write scientific articles. Documents produced using Word just look downright ugly. Here are some more myths about desktop publishing.
There are just two choices for good professional quality page layout (such as would be used to create a modern high-quality book):
- Adobe InDesign (much recommended)
- QuarkXPress (used to be the market leader, but is now not nearly as good as InDesign, although still number two)
Both these software packages help to perfect some critical aspects of document composition and layout: hyphenation, rivers of white space, orphans and widows. The sophisticated optical kerning, tracking and optical margin alignment controls present in page-layout software can be used to eliminate visual errors and distractions.
Other software, like Apple's Pages, can also produce decent-looking layouts and do some basic kerning and tracking, but lacks the automatic document adjustment features that are necessary to create a really good looking print job.
Now, however, some new companies have sprung up to help the normal person produce professional quality books. I was listening to an interview with Eileen Gittins, the CEO of Blurb. Blurb offers a desktop client and online service that makes producing really good looking books both cheap and easy. The company has just started out, so the software is a little limited in terms of features and number of available templates, but it shows great promise. Note: a competing service called Lulu offers printing and publishing, but without the help with design and page layout.
Eileen gives the example of a businessman who sent out his 23-page business plan printed using cheap over-the-counter printing and got no response from prospective investors. He then took the exact same material and created a hardcover book (for a cheaper price) using Blurb's service and sent that out to some investors. The result: almost everyone phoned him back - mostly asking "how did you create this amazing book?". Eileen Gittins says:
In our society books have a real cultural pedigree. People don't throw books away. They do throw away things that appear like photocopies. So the shelf life of his book caused people to pick up the phone to phone him.
Does that sound familiar? Here is an excerpt from the Srila Prabhupada Lilamrita:
When a librarian advised Bhaktivedanta Swami to write books (they were permanent, whereas newspapers were read once and thrown away), he took it that his spiritual master was speaking through this person. Then an Indian Army officer who liked Back to Godhead suggested the same thing.
So then: don't underestimate the value of a well-produced book. It can work wonders. Please, please, please do not (ever!) use Microsoft Word to publish anything. Learn good publishing if you can, or, if you can't, use a service like Blurb to produce high quality books. And finally: save the world.
I've created a system for memorizing Bhagavad-Gita verses called Sloka Raja.
You can go to the Sloka Raja website and see a series of verses hidden by a saffron veil. Each verse number is given in a tab along the top of the window. If you hover the mouse over the veil shrouding a particular verse, then that text's veil becomes transparent and you can "peek" at a single line of the original Sanskrit or the English translation. You can also click the mouse button and the text becomes permanently uncovered. Clicking again re-hides the verse.
Click the left and right arrows to scroll to other verses you want to memorize. You can also directly select and scroll to the verse you want to review by clicking on the appropriate tab on the top of the window.
Pressing the "change this verse" button on the bottom of the screen puts the verse display into "selection mode". Using this mode you can change the verse you want to learn to a different one. Simply select a new chapter and verse from the list in the window and that new verse will replace the current one. Press the "accept changes" button to switch back to the memorization view. In this way you can customize the view to learn different sets of verses as you desire.
The system always remembers your personal selection of verses. When you finish using the website simply close the window. There is no need to save. Sloka Raja remembers where you left off automatically. The next time you return the website recreates your personal view exactly as you left it. Everyone can choose their own personal set of verses to memorize on Sloka Raja. It remembers a different custom selection of verses for each and every user of the system.
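Behind the scenes, this kind of no-save persistence just means serializing each user's selection after every change and reloading it on the next visit. I don't know how Sloka Raja actually stores its state (a browser cookie, most likely); here is a purely hypothetical server-side sketch in Python:

```python
import json
import os
import tempfile

# hypothetical per-user state: which verses are selected, and which are uncovered
state = {"verses": ["BG 2.13", "BG 2.20", "BG 2.59"], "revealed": ["BG 2.13"]}

# one small file per user (the filename scheme here is made up)
path = os.path.join(tempfile.gettempdir(), "sloka_raja_user42.json")

# "no need to save": persist automatically after every change
with open(path, "w") as f:
    json.dump(state, f)

# on the next visit, restore the view exactly as it was left
with open(path) as f:
    restored = json.load(f)
```

Because the state is written on every change rather than on an explicit save, closing the window can never lose anything.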
Sloka Raja is available at the following URL:
If anyone notices any bugs or has any suggestions for improvement please let me know.
Sorry for any hiccups that might have occurred today during the transition. If you tried to send me email that didn't go through, then please resend. All should be working now, but if you spot any bugs, please let me know.
My reasons for switching were mainly Surpass Hosting's backup policy. I lost two weeks' worth of content when their hard drive crashed. A backup only every two weeks is totally unacceptable. The new host is very highly regarded, judging from the reviews I've read. They also give me more space and bandwidth than I would have had with Surpass Hosting. Finally, they aren't located in (soon to be wiped off the face of the Earth due to hurricanes) Florida, but in the nice quiet state of Utah.
There was a talk on "The Web Structure of E-Government - Developing a Methodology for Quantitative Evaluation".
The researchers from University College London (UCL) used several statistical measures for evaluating government websites: worst-case strongly connected components, incoming vs. outgoing links, path length between pages, etc. They compared their statistical measures with results from user evaluations. That is, they got a bunch of users together and measured how long it took them to find stuff on various websites (both with and without using Google).
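To give a feel for what such link-structure statistics involve, here is a toy sketch (my own, not the UCL researchers' code) computing in/out link counts and the click distance between two pages of a made-up immigration site:

```python
from collections import deque

# Two of the measures mentioned, on a tiny invented site graph:
# in/out link counts per page, and the shortest click-path (BFS)
# between two pages.

site = {                       # page -> pages it links to
    "home":  ["visa", "faq"],
    "visa":  ["forms", "faq"],
    "faq":   ["home"],
    "forms": [],
}

def degrees(graph):
    """Count incoming and outgoing links for every page."""
    out_deg = {p: len(links) for p, links in graph.items()}
    in_deg = {p: 0 for p in graph}
    for links in graph.values():
        for target in links:
            in_deg[target] += 1
    return in_deg, out_deg

def path_length(graph, start, goal):
    """Minimum number of clicks to get from start to goal (BFS)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        page, dist = queue.popleft()
        if page == goal:
            return dist
        for nxt in graph[page]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None                # goal unreachable from start

in_deg, out_deg = degrees(site)
```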
They tested the UK, Australia and USA immigration websites. The results:
- UK is best, both navigating the link structure and searching
- AU is terrible to navigate, but good to search
- USA is bad any way you look at it, but at least search will eventually find you what you are looking for.
Automated statistics don't tell you much.
More info at: www.governmentontheweb.org
This was followed by a talk by Ian Pascal Volz from the Johann Wolfgang Goethe University in Germany. He talked about "the Impact of Online Music Services on the Demand for Stars in the Music Industry".
His main (and interesting!) finding is that people tend to buy music they already know and like from online music stores like the iTunes Music Store. Peer-to-peer file sharing networks, on the other hand, tend to get people to try and discover new music. Virtual communities are somewhere in between the two.
People who buy music will not spend any money on something they don't already know and value. Even $1 per song is too high a price for a casual purchase. If you want people to discover your music and you are unknown it must be available for free.
On a related topic: when recording lectures on spiritual subject matter, please, please, please don't try to charge for them. No one will pay. Make them available for free. That way the whole world will benefit.
And so ends the WWW2006 conference. Next stop Banff, Canada for WWW2007.
A presentation by some researchers from Karlsruhe, Germany was very interesting (well presented, too). They talked about their "semantic wikipedia", an extension to the popular MediaWiki that allows authors to express some semantics, i.e. to get at the hidden data within the articles.
The normal wikipedia only has plain links between articles. Nevertheless, it is the 16th most successful website of all time (according to alexa.com). However, in the semantic version every link has a type. Object properties map concepts to concepts and datatype properties map concepts to data values.
Why do it this way? Answers: adding these annotations is cheap and easy (no new UI), they can be added incrementally, and there is no need to create a whole new RDF layer on top of the existing content; the annotations are right there in the wiki text.
This simple addition is enough to allow for powerful queries. You can create pages that automatically pull in all articles of a specific category, with a specific title and between a specific date range, for example. Checking for completeness becomes easier too: you can construct a query that tests if every Country has a Capital. If some countries come up that don't, those can be easily fixed.
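The completeness query can be illustrated with a toy model. This is not actual Semantic MediaWiki syntax, just my own sketch of the idea: once links are typed, "countries without a capital" becomes a trivial query over (page, property, value) triples:

```python
# Typed links stored as (page, property, value) triples; the example
# pages and property names are made up for illustration.

triples = [
    ("Germany",   "has_capital", "Berlin"),
    ("France",    "has_capital", "Paris"),
    ("Germany",   "is_a",        "Country"),
    ("France",    "is_a",        "Country"),
    ("Ruritania", "is_a",        "Country"),   # fictional, and incomplete
]

def pages_with(prop, value):
    """All pages that have the given typed link."""
    return {s for (s, p, o) in triples if p == prop and o == value}

def missing(prop, among):
    """Pages in `among` that lack the given property entirely."""
    has = {s for (s, p, o) in triples if p == prop}
    return among - has

countries = pages_with("is_a", "Country")
incomplete = missing("has_capital", countries)   # countries to fix
```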
The whole thing self-regulates. Each property has its own page in the wiki, so that people can suggest property types and eventually come to a consensus about which properties are the right ones to use.
The wiki can be imported into OWL and vice versa. The template system can also be leveraged to quickly create semantic annotations.
The whole thing is a win-win-quick-quick scenario (bit of an in-joke there).
Harith Alani presented his position paper on building ontologies from other online ontologies. He explained how building ontologies is difficult, so it is best to reuse existing knowledge bases, or, even better, completely automate ontology construction. The current state of affairs is that there are quite a few ontology editing tools, but little support for reuse. Furthermore, these tools are built for highly trained computer scientists, not the average web developer.
His idea is to combine three existing research areas:
Ontology libraries (e.g. the DAML library, Ontolingua) and ontology search engines (e.g. Swoogle) can be used to locate ontologies on the Internet.
Ontology segmentation techniques (like mine) can be used to cut smaller pieces out of these ontologies.
Ontology mapping techniques can be used to reassemble the pieces into new ontologies.
Result: instant custom ontology. However, getting this working in practice takes quite a bit of doing. He himself admitted that it was quite an ambitious undertaking. Good idea though.
Mustafa Jarrar (from Belgium) and Paolo Bouquet (from Trento, Italy) presented the next two papers. They talked about a very similar topic: both were advocating linking ontology terms to dictionary / glossary definitions.
It was interesting to observe these two researchers' presentation styles. Paolo was very fast and frantic, very much unlike Mustafa, who was very slow and relaxed, even when trying to hurry (Vata vs. Kapha, for those knowledgeable in Ayurveda).
Mustafa told of how he built a complex ontology for some lawyers, but, after he had gone through the trouble of carefully constructing this knowledge base, the lawyers found it too complicated to understand and threw everything except the glossary part away. However, they did really like and appreciate having a sensible glossary of all kinds of law-related knowledge.
He defined this "gloss" as:
auxiliary informal (but controlled) account for the common sense perception of humans of the intended meaning of a linguistic term
The glosses should be written as propositions, be consistent with the formal definition, focus on the distinguishing characteristics of what is being described, be sufficient and clear, use supportive examples and be easy to understand.
Advantages are that these glosses are highly reusable (very important for his lawyer clients) and that they are very easy to agree upon.
So everyone: link your ontology to WordNet (or something better)!
Paolo picked up the issue and talked about his WordNet Description Logic (WDL), an extension to DL that adds lexical senses to the vocabulary of the logic. It allows for compound meanings, so UniversityOfMilan is automatically inferred to be a University that hasLocation some Milan.
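In description logic notation, that compound-meaning inference might be written roughly as follows (my own rendering of the idea, not WDL's exact syntax):

```latex
% The compound concept is unpacked into its lexical parts:
\mathit{UniversityOfMilan} \sqsubseteq
    \mathit{University} \sqcap \exists\, \mathit{hasLocation}.\{\mathit{Milan}\}
```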
Using this type of dictionary-link makes it possible to check for errors by comparing the glossary definition to the logical semantics. If they don't match, a potential error can be flagged.
His system also allows for bridging and mapping between ontologies. If two ontology concepts refer to the same dictionary definition, then that is a very good indication that they are describing the same sort of thing.
I was inspired to imitate the recent comments feature of Sitapati's blog. However, the side panels of this website design are already cluttered with all kinds of useful information. I didn't want to add yet another long heading. I have therefore created dynamic HTML pop-up display of the most recent comments. Check it out!
Tell me how you like it and let me know if there are any bugs.
Now came the chance for up-and-coming semantic web developers to demo their killer applications. The apps that will revolutionize the Internet, on display.
Tim Berners-Lee (who uses a Mac, by the way) showed his Tabulator RDF browser. He gave a brief talk and demo of the app. It gives an "outline"-style view of RDF and asynchronously and recursively loads connected RDF using AJAX technology. It follows 303 redirects, follows # sub-page links, uses the GRDDL protocol on XHTML and smushes on owl:sameAs and inverse functional properties (the killer feature, apparently).
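Smushing is simple enough to sketch. The following toy Python (my illustration, not Tabulator's code) merges nodes that are declared owl:sameAs, or that share a value for an inverse functional property such as foaf:mbox, using union-find:

```python
# "Smushing": merging RDF nodes known to denote the same thing, either
# via an explicit owl:sameAs link or because they share a value for an
# inverse functional property (IFP). Union-find tracks merged identities.

parent = {}

def find(x):
    """Return the canonical representative of x's identity group."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

triples = [                             # made-up example data
    ("ex:timbl",   "owl:sameAs", "ex:TimBL"),
    ("ex:TimBL",   "foaf:mbox",  "mailto:timbl@w3.org"),
    ("ex:person4", "foaf:mbox",  "mailto:timbl@w3.org"),
]
IFPS = {"foaf:mbox"}                    # properties with one subject per value

ifp_seen = {}                           # (property, value) -> first subject
for s, p, o in triples:
    if p == "owl:sameAs":
        union(s, o)
    elif p in IFPS:
        if (p, o) in ifp_seen:
            union(s, ifp_seen[(p, o)])  # same mailbox => same person
        else:
            ifp_seen[(p, o)] = s
```

After the loop, `ex:timbl`, `ex:TimBL` and `ex:person4` all resolve to one identity.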
Some people commented to me afterwards that they thought no one should ever have to see the RDF of a semantic web application, let alone browse it. Oh well.
Then came DBin. Not just a browser, no, a semantic web rich client! It uses so-called brainlets (HTMLS) and a new semantic transport layer (not HTTP) to dynamically query and retrieve RDF using peer-to-peer transfer.
Again, I'm skeptical. It is just (yet another useless) RDF browser that saves bandwidth by sending the data through a peer-to-peer network. But RDF file sizes aren't exactly huge and compression will do far more than peer-to-peer to help with bandwidth. This browser is solving a problem that doesn't exist.
Next up: Rhizome, a Python-based app that allows one to build RDF applications in a wiki style. It uses the Raccoon application server to transform incoming HTTP requests into RDF, evolve them using rules, and validate them with Schematron. In short, it is to RDF what Apache Cocoon is to XML. Or, in more understandable terms: you declaratively build your website using RDF for everything from the layout to the database.
Pity, of course, that no one uses Cocoon, and this Rhizome system looks really complicated, despite being pitched at "non-technical folks".
At this point I left the semantic web demo session. My thinking: these guys are nuts.
It uses a shallow (read: simple) ontology to label areas of a web page according to their functional roles. It also creates a hierarchy of elements inside each area or module. The third component of the system is a finite state automaton (FSA) for moving between functional states of the website.
Putting these three things together allows one to identify common trails of FSA transitions, that is, processes which users tend to perform regularly. Having identified these trails, one can cut out all the modules that do not contribute to the task. All useless clutter is eliminated from each web page.
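The trail-mining step is easy to sketch. Here is a toy reconstruction (my own, with made-up page states and modules, not the authors' code): count the most frequent transition trails across user sessions, then keep only the modules those trails actually use:

```python
from collections import Counter

# Sessions are sequences of FSA states (pages). Frequent fixed-length
# trails reveal the tasks users perform regularly; modules not used by
# those trails are candidates for removal.

sessions = [
    ["home", "search", "results", "item"],
    ["home", "search", "results", "item"],
    ["home", "news", "home", "search", "results", "item"],
]

def trails(session, length=3):
    """All consecutive state trails of the given length in a session."""
    return [tuple(session[i:i + length])
            for i in range(len(session) - length + 1)]

counts = Counter(t for s in sessions for t in trails(s))
top_trail, _ = counts.most_common(1)[0]

modules = {                    # state -> page modules shown in that state
    "home":    {"nav", "news-box", "search-box", "ads"},
    "search":  {"nav", "search-box", "ads"},
    "results": {"nav", "result-list", "ads"},
    "item":    {"nav", "item-view", "ads"},
}
# Everything outside this set is clutter for the dominant task:
needed = set().union(*(modules[state] for state in top_trail))
```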
Result: mobile web surfing could be done twice as fast as before, and blind web surfing (using a screen reader) four times faster.
Future work: mining for workflows, using web services and analyzing the semantics of web content. Problems: coming up with a standard way to describe the process and concept models. A system for semantic annotation of web content is needed.
I was impressed. It sounds like a really good idea. It takes three relatively simple ideas and combines them into something innovative and powerful. Nice.
Wendy Hall started off the day talking about the trials and tribulations of organizing the conference. She had to put up a £0.5 million deposit to secure the conference center three years in advance. She could have kissed her career goodbye, if this conference had not been a success.
Next, Charles Hughes, the president of the British Computer Society (BCS), spoke. He gave an utterly boring scripted speech about how computing needs to become a respected profession.
Carole Goble then spoke about the paper review process. The conference was super-competitive. 700 papers were submitted, over 2000 reviews issued, and only 84 papers accepted (11% acceptance ratio).
Thereafter came a panel discussion on the next wave of the web. Important people from research and industry talked about the semantic web. Business wants TCO figures, risk measures, an abundance of skilled ontology engineers and stuff like that. Academia underestimated the amount of work necessary (and wants more grant money).
Ontologies can be used today: they are especially useful for unstructured information and to organize already structured information in database tables.
Tim Berners-Lee brushed off Web 2.0 as just hype: that's just AJAX and tagging, and folksonomy is not going to fly in the business world. The real, hard-core Semantic Web is where it's at. What's more: we're already there. We've reached critical mass, but just haven't realized it yet. All we need is for the right search engine to "connect the dots" and boom! Instant semantic web via network effect (or something like that).
The right user interface is going to be the most difficult part. Browsers will need an "Oh yeah? Why?" button to query the RDF and give a justification for any entailment.
"Don't think of the killer app for the semantic web, think of the semantic web as the killer app for the normal web"
The value of the semantic web will be universal interoperability and findability. We have more information than ever before and are spending longer trying to find stuff. The semantic web will help automate some of the "finding stuff". The search engines of today aren't sufficient when searching for information on Exxon Mobil, for example: that will return millions of hits.
Tim: "search engines make their money making order out of chaos, if you give them order, they don't have a business. That's why they are not interested in the semantic web"
Take home message from the panel:
- "you ain't seen nothing yet"
- "a lot of education still has to go on. It needs to get simpler for the average business person and there needs to be a lot more investment"
- "we can already apply the first results in a business context"
- "it's a great simplifying technology"
My take: they are quite right, we have indeed not seen anything yet ... if nothing else they certainly succeeded in securing the next 5 years of grant money ...
The World Organization of Webmasters tutorial session offered a chance to take an exam to become a certified professional webmaster. I thought, "what the heck": the exam normally costs $195 and here at the conference they were offering it for free, so I might as well give it a go.
The exam wasn't easy. One needed to answer 70% of the questions correctly to qualify as a professional webmaster. There were some tough questions. A typical question would be something like:
Which of the following is valid XHTML 1.0 / HTML 4.0 (mark all that apply):
a. <img src="image.gif" alt="the image" height=25 width=25 />
b. <strong><a href="link.html">click here</strong></a>
c. <DIV CLASS="style.css">text</DIV>
d. <img src="picture.jpg" alt="my picture" />
e. <hr><a href="page.html">next page</a><hr>
f. (none of the above)
Bill Cullifer was impressed with the exam results. Most people did extremely well. He commented that the individuals present were obviously among the top people in the world in the Internet field.
I passed, of course. I'm now a WOW Certified Professional Webmaster.
David Leip from IBM and David Shrimpton from the University of Kent talked about Web 2.0. The Web 2.0 phenomenon is exemplified in the difference between mapquest and google maps, ofoto and flickr, britannica online and wikipedia, personal websites and myspace, stickiness and syndication, etc. The value of a website can no longer be measured by how many people visit it. Instead people can subscribe to feeds off the website and get all the benefits without ever actually visiting the site.
Websphere is IBM's Java Enterprise Application Server. Its biggest competition no longer comes from products like BEA WebLogic, but instead from Amazon. Amazon offers people a virtual e-marketplace that handles all the accounting, advertising, searching, buying, selling and refunds. All you have to do is set up a user account and use their APIs. Very easy and very cheap; very Web 2.0.
Another Web 2.0 phenomenon is the perpetual "beta". A product is never finished, but rather is continuously re-evaluated and refined. Updates can be pushed to all users, since the entire application lives on the web.
New applications create buzz by being genuinely fun to use. Google Maps delights its visitors. The wow factor makes people stay loyal. However, as soon as things start to go wrong, people will very quickly switch to another service that works. Word of mouth is the way! Google never advertises; they don't have to.
Web 1.0 was all about commerce; Web 2.0 is all about people (what Web 3.0 will be is still written in the stars). The myriad WS-* standards may be useful and necessary for the enterprise, but any normal person will be totally bewildered by WS-* standards vertigo. Web 2.0 is about the people taking back the Internet.
In the Web 2.0 world, accessibility matters. Don't use red and green together on a web page; some people are color blind. Use XHTML and CSS; some people use screen readers.
Exclusive, hierarchical, fixed taxonomies are out. Flexible, flat, multi-tag, emergent folksonomies are in.
Microformats decree: humans first, machines second. They are the lower-case semantic web. They use simple semantics, adding to the stuff that's already there, instead of inventing the hugely complicated description logic stuff (that I'm working on). Microformats are cheap, easy and, as long as people agree on them, they can be just as powerful and interoperable as if you had created a full XML-Schema monster. More at microformats.org and programmableweb.com.
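As a concrete example, here is an hCard, one of the best-known microformats (the person and addresses below are made up). The markup is ordinary HTML; the agreed-upon class names carry the semantics, so machines can extract a contact card while humans just see a name:

```html
<!-- hCard: plain HTML made machine-readable via shared class names -->
<div class="vcard">
  <a class="url fn" href="http://example.org/">Jiva Das</a>
  <span class="org">Example Temple</span>
  <a class="email" href="mailto:jiva@example.org">jiva@example.org</a>
</div>
```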
MacZOT.com Fans want Pzizz because 'According to the National Sleep Foundation, sleep deprivation and its effect on work performance may be costing U.S. employers some $18 billion each year in lost productivity. Another study pushes this cost to over $100 billion.' - link to full article
I posted that advert clip so that I get a free version of that software (via BlogZOT). It uses NLP to enhance the sleeping/napping process. Taking frequent naps allows one to sleep less overall which means more time for doing pure-goodness activity.
We have a server hard disk crash and we could not restore the files from the old drive. We have restored the files from the only available backup.
The available backup for your domian was backed up only on april 22nd.
Yup, the company that I use to host this website (Surpass Hosting) has had a major hard drive failure. They lost a week's worth of their clients' data. I'm seriously considering moving to a different hosting provider that is more reliable. Anyone have any suggestions?
In any case, I've managed to restore most of the data from records I've kept on my personal machine and the nice cache features of MSN Search and Yahoo search (Google's cache isn't updated frequently enough, and other search engines are updated too frequently). It will however be some time before I have everything back to working order. Expect a few glitches here and there.
I just listened to an interesting interview with John Barrow, a cosmologist and mathematician who talks about his book: The Infinite Book : A Short Guide to the Boundless, Timeless and Endless.
He explains how the Universe may or may not be infinite and outlines a theory where our particular Universe is finite, but there exists an infinitely old realm of unlimited parallel universes beyond ours. We will, however, never know for sure, since, in order to get information from those other universes, that information would have to travel faster than the speed of light, which is, of course, impossible (according to Einstein).
This theory sounds remarkably similar to the view of the Universe given in the Srimad-Bhagavatam. Maha-Visnu is in the infinite spiritual causal ocean where time does not exist and generates unlimited finite Universes just like ours.
Barrow also explains how there are different-sized infinities (a line of thought going back to Galileo Galilei and made rigorous by Georg Cantor). There are, in fact, an unlimited number of infinities, each larger than the last. An infinity of all infinities is mathematically impossible (as shown by Cantor), but hypothetically possible for a meta-physical being such as God.
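Cantor's theorem is what drives this endless tower of infinities: every set is strictly smaller than its own power set, so there is always a bigger infinity. In symbols:

```latex
% Cantor's theorem, and the resulting unbounded ladder of infinities:
|S| < |\mathcal{P}(S)|
\qquad\Longrightarrow\qquad
\aleph_0 < 2^{\aleph_0} < 2^{2^{\aleph_0}} < \cdots
```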
There are several statements in Vedic literature that the spiritual energy is three times larger than the material energy (SB 2.6.20 and Caitanya Caritamrita Madhya-lila Chapter 21, Verses 51, 55, 56, 57, 87). Devotees always told me that these statements were not to be taken literally. I was, however, never satisfied with that explanation. However, using Cantor's mathematics of infinite sets, it is indeed perfectly reasonable to talk about multiple differently sized infinities.
I've just fixed some more bugs in this website. For some reason the last few lines of some of my modified WordPress theme files got cut off and replaced with a garbled server error message. Weird (or: hackers!?). If this website looked a bit strange over the past few days, then that was the reason.
Anyway, it's all working again now (and more secure, too). Enjoy.
(Note: I've also changed the layout slightly. Tell me what you think.)
I've just fixed some bugs in this website. Some dead links in the picture gallery now work. The entire site should now display better in Internet Explorer ... and if you are still using Internet Explorer:
World: I'd love to make my web site smarter, link things together more intelligently.
Semantic Web Research Community: Sure! You need a generalized framework for ontology development.
World: Okay. That'll help me link things together more easily?
SWRC: Even better, it will lead to a giant throbbing robot world-brain that arranges dentist's appointments for you! Just read the Scientific American article.
World: Will that be a lot of work?
SWRC: No. But even if it is, we will blame you for being too stupid to understand why you need it.
World: Huh. I guess so. But I don't understand why I need it, exactly.
SWRC: That is because you are too stupid. It's fine, we have your best interests in mind.
World: I don't want to nag, but while I read a book on set theory, how about those fancy links?
SWRC: Well, if you insist, and can't wait, there's always XLink.
World: Aha. That looks handy...except, oh, there's no easily available implementation. And I'm not really sure what it's supposed to do.
SWRC: That is because you are lazy and stupid.
World: Ah well. Do you think I should apply for grants for the development of my little web site Ftrain.com? Just enough for a monthly unlimited Metrocard would be a help.
SWRC: We will have all the grants! Be gone with your bachelor's degree from a second-tier private liberal arts college! And where is your RSS feed?
SWRC: Slacker! Bring me more graduate students, I am hungry!
These pictures are of Guruvani's and Bhumna Krishna's visit to my parents' house in Germany shortly before jetting off to a faraway country.