The original vision of the semantic web as a layer on top of the current web, annotated in a way that computers can "understand," is certainly grandiose and intriguing. Yet, for the past decade it has been a kind of academic exercise rather than a practical technology.
… The purpose of the semantic web is to enable computers to "understand" semantics the way humans do. Equipped with this "understanding," computers will theoretically be able to solve problems that are out of reach today…
This vision, presented by Sir Tim Berners-Lee in 2000 (see image below), is the classic approach; the rest of the post will focus on the difficulties with it.

Well, this reminds me of one of the first applications of computers: translating Russian texts, so that the English-speaking world would understand the first flight into space. If I remember well, that was in 1957, yet software able to translate on demand appeared only after 2001, and those translations still had to be corrected by humans before being published.
The same idea appears here: the Russians have server software able to interpret "xml" pages according to the user's stated preferences. It seems to me that we are still struggling to overcome language barriers, while forgetting how hard it is to build, let's say, a walking robot.
Because the designers were shooting for flexibility and completeness, the end result is documents that are confusing, verbose and difficult to analyze.

Again: designers, people searching for something new, do not want to respect standards ;-)
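To see what that verbosity looks like, here is a minimal, hypothetical RDF/XML document (the example.org URI and the fields are invented for illustration): most of it is namespace and wrapper machinery for just two plain facts.

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- All of this states just two facts: the book's title and its author. -->
  <rdf:Description rdf:about="http://example.org/book/1234567890">
    <dc:title>Example Book</dc:title>
    <dc:creator>Jane Doe</dc:creator>
  </rdf:Description>
</rdf:RDF>
```

A human would write the same thing in one line; a parser has to wade through the scaffolding first.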
If it is to be done by a centralized entity, then there will need to be a Google-like semantic web crawler that takes pages and transforms them into RDF. This comes back to the issue we just discussed: having an automatic algorithm that infers meaning from text the way humans do. Creating such an algorithm may not be possible at all (and it again begs the question of why RDF is needed if the algorithm exists).

:-) Google is the worst enemy… or maybe the Chinese programmers have something new…
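To show how wide the gap is, here is a toy page-to-triples "crawler" step in Python (my own sketch, not from the post). It can lift out only what is already explicit, the title and meta tags; everything stated in the body text, which is where human-level inference would be needed, is simply lost.

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collect (subject, predicate, object) triples from explicit metadata only."""

    def __init__(self):
        super().__init__()
        self.triples = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs and "content" in attrs:
            self.triples.append(("page", attrs["name"], attrs["content"]))
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.triples.append(("page", "title", data.strip()))

html = """<html><head><title>Example Book</title>
<meta name="author" content="Jane Doe"></head>
<body>A first edition, signed by the author.</body></html>"""

parser = MetaExtractor()
parser.feed(html)
print(parser.triples)
# [('page', 'title', 'Example Book'), ('page', 'author', 'Jane Doe')]
# The fact in the body text ("first edition, signed") never becomes a triple.
```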
For example, suppose there are representations of a book defined by Barnes and Noble and by Amazon. Each has common fields like ISBN and Author, but there may be subtle differences; e.g., one of them may write the edition as "1st edition" and the other as "edition 1". This seemingly minor difference, one that people would not even think twice about, would wreak havoc in computers.
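A two-line sketch makes the failure concrete (the records and field names are invented for illustration; the ISBN is a dummy value):

```python
# The same book, as two hypothetical retailers might describe it.
bn_book     = {"isbn": "1234567890", "author": "Jane Doe", "edition": "1st edition"}
amazon_book = {"isbn": "1234567890", "author": "Jane Doe", "edition": "edition 1"}

# A human sees one book; a literal comparison sees two different records.
print(bn_book == amazon_book)  # False, solely because of the edition strings
```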
The only way to have interoperability is to define a common standard for how to describe a book. So having self-describing documents is not enough, because there still needs to be a literal syntactic agreement in order for computer systems to interoperate. The bottom line is that there needs to be a standard and an API…
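Continuing the toy example above, once both sides agree on one canonical form, interoperation becomes a mechanical normalization step (the canonical form here is invented for illustration):

```python
import re

def normalize_edition(raw: str) -> str:
    """Map variants like '1st edition' or 'edition 1' to a canonical bare number."""
    match = re.search(r"\d+", raw)
    if match is None:
        raise ValueError(f"no edition number found in {raw!r}")
    return match.group()

print(normalize_edition("1st edition"))  # '1'
print(normalize_edition("edition 1"))    # '1'
```

The agreement that "edition" means "a bare number, extracted this way" is precisely the standard-plus-API the post says cannot be avoided.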
… The technical, scientific and business difficulties are substantial, and to overcome them there needs to be more community support, standards and pushing. This is not likely to happen unless there are clearer reasons for it.
Well, a nice conclusion for a person who neglects the network administrators' jobs.