Into week 3 now. I’ve been reading a few online blogs and articles, and it seems like much of what Tim Berners-Lee (who brought you such products as the WWW) promised concerning the semantic web has been in the works for a long time and has even failed a few times. Kind of disconcerting, no?
It seems like many people are stuck in the “let’s place content into its proper context before creating applications for it” mentality. I’m starting to think that this semantic web idea is losing steam. :-/
First off, I’m not saying that I’m dumping the semantic web, no. I’ve seen some pretty cool applications using RDF that catalog information for the US concerning terrorists and their ties to other existing resources on file (Profiles in Terror). What I’m having trouble with is how and why an agent would want to use an RDF-formatted document when we can just write up custom apps to look at specific documents to get such data. Also, why can’t we just use a combination of a database for storage and a web service to promote sharing of our data? I don’t get it.
Here’s an example. The application I was looking over, which was created by a semantic web researcher, formatted mock data concerning terrorists: their connections between factions, family ties, events, and other essential items a human agent might need to find or form connections between data. Since this was a semantic web application, I’m assuming that the data was not centrally located, which means it came from different sources. Here’s my issue with such an approach: why not create a web service for each department that allows outside sources to query their data? Call it getTerroristByName(‘Name of the terrorist here’).
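To make the RDF side of the comparison concrete, here’s a rough sketch of the kind of linked data such an application works over, written as plain Python triples rather than actual RDF syntax. All the identifiers and facts below are made-up placeholders, not anything from the real Profiles in Terror project; the point is only that when separate sources agree on shared identifiers, their data merges trivially.

```python
# RDF-style data as (subject, predicate, object) triples.
# Each department publishes its own facts; all names here are mock placeholders.
dept_a = [
    ("person:alice", "memberOf", "faction:x"),
    ("person:alice", "relativeOf", "person:bob"),
]
dept_b = [
    ("person:bob", "attended", "event:meeting-1"),
    ("faction:x", "linkedTo", "faction:y"),
]

# Because both sources use the same identifiers, "integration" is just
# concatenating the two lists into one graph.
graph = dept_a + dept_b

def facts_about(subject, triples):
    """Return every (predicate, object) pair recorded for a subject."""
    return [(p, o) for s, p, o in triples if s == subject]

print(facts_about("person:alice", graph))
```

That merge-by-concatenation step is the part RDF is selling: no department had to know the other’s schema in advance.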
This would be a typical use case: Department “A” calls up the web service from Department “B”. Department “A” passes it the name of the terrorist, and Department “B”’s web service returns all the data they hold matching that specific string value. This approach would then eliminate the need for an RDF/OWL layer of complexity.
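The use case above can be sketched in a few lines. This is only a mock of the idea, not a real service: `getTerroristByName` is the hypothetical call named earlier, and the records and field names are invented placeholders standing in for whatever Department B actually stores.

```python
# Mock of the web-service approach: Department B exposes one lookup call,
# and Department A just passes a name string. All data here is made up.
DEPT_B_RECORDS = [
    {"name": "John Doe", "faction": "X", "events": ["meeting-1"]},
    {"name": "Jane Roe", "faction": "Y", "events": []},
]

def getTerroristByName(name):
    """Return every record Department B holds for that exact name string."""
    return [r for r in DEPT_B_RECORDS if r["name"] == name]

# Department A's side of the use case:
results = getTerroristByName("John Doe")
print(results)
```

One trade-off worth noting: in this design, Department A has to know Department B’s call signature and record layout up front, so every pair of departments that wants to share data needs its own agreed-upon interface.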