Semantic wiki, large-scale machine learning and rules

Ten years ago the semantic web was no more than ambitious speculation, because no knowledge base existed that could provide enough reliable semantics for the content of the internet. For the past six years we have had Wikipedia, and researchers soon found that it can be used as a general knowledge base for the semantic web. Together with large-scale machine learning, the potential of the semantic web is being unleashed. At the Institute AIFB of the University of Karlsruhe, Germany, Semantic MediaWiki is under heavy development, and there is a lot happening in this area. Here are the slides of a presentation by Denny Vrandecic, a Ph.D. candidate at the University of Karlsruhe, given at the Semantic Web 2.0 Conference, 2007, in Seoul. More research papers can be found on the AIFB home page. Ontology modeling is on Drools' roadmap; hopefully there will one day be a Drools module that can effectively and efficiently extract and make use of the semantics flowing through the web.

The Semantic Web in Ten Passages, written by Prof. Harold Boley, describes well what the Semantic Web is, which problems it tries to attack, and the challenges it faces. Recommended reading if you are not familiar with the Semantic Web.
