Friday, December 23, 2005

The Scenarios for the Semantic Web

The Semantic Web has been a very popular topic in both the popular press and the technical and academic research literature. This promising extension to the current Web has raised a number of expectations, which can be portrayed as three different perspectives on the Semantic Web:[1]
  • a universal library, to be readily accessed and used by humans in a variety of information use contexts;
  • the backdrop for the work of computational agents completing sophisticated activities on behalf of their human counterparts;
  • a method for federating particular knowledge bases and databases to perform anticipated tasks for humans and their agents.

[1] Marshall, C. C. and Shipman, F. M. 2003. Which Semantic Web? In Proceedings of the Fourteenth ACM Conference on Hypertext and Hypermedia (HYPERTEXT '03), Nottingham, UK, August 26-30, 2003. ACM Press, New York, NY, 57-66.

Monday, December 19, 2005

Top Down and Bottom Up

Category systems undoubtedly make data, information, and knowledge easier to combine and to communicate. There are two kinds of category systems (a short sketch follows the list):
  • Top-down categories: the conventional, editor-defined directories like Google's and Yahoo's.
  • Bottom-up categories: emerging tagging systems, in which user-generated labels are created in free association with an object such as a photo or a webpage.
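To make the contrast concrete, here is a minimal sketch in Python (all category names, object IDs, and tags are invented for illustration) of how the two kinds of systems organise objects: a fixed hierarchy decided in advance by editors versus free-form labels contributed by users.

    # Top-down: a fixed hierarchy decided in advance by a site's editors.
    taxonomy = {
        "Science": {
            "Computer Science": ["Artificial Intelligence", "Databases"],
            "Biology": ["Genetics"],
        },
    }

    # Bottom-up: free-form tags attached to objects by many independent users.
    tags = {
        "photo_1234": {"sunset", "beach", "holiday2005"},
        "page_5678": {"semanticweb", "rdf", "tagging"},
    }

    def find_by_tag(tag, tagged_objects):
        """Return every object that at least one user labelled with `tag`."""
        return [obj for obj, labels in tagged_objects.items() if tag in labels]

    print(find_by_tag("rdf", tags))                 # ['page_5678']
    print(taxonomy["Science"]["Computer Science"])  # editor-chosen labels only

In the top-down case an object can only live where the hierarchy allows; in the bottom-up case the vocabulary emerges from whatever labels users happen to apply.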

Sunday, December 18, 2005

Semantic Web Comes

The widespread use of Internet technologies has created a wave of innovations and interactions with a major impact on almost every area of human activity, even though a large number of these applications were not foreseen thirty years ago, when the Internet architecture was developed. In only 14 years, the World Wide Web became the greatest network-accessible repository of human knowledge. It contains around 3 billion documents, which are accessed by over 500 million users worldwide. Moreover, the Internet continues to evolve, integrating other technologies such as multimedia and mobile technologies. On the other hand, information overload and poor aggregation of content make the current Web inadequate for automatic transfers of information. As a consequence, the current Web is evolving toward a new generation called the Semantic Web, in which data and services are understandable and usable not only by humans but also by computers. "Moreover, the Semantic Web may further evolve to a Sentient Web, which is a further new generation of Web with capabilities for sentience. As a consequence, new technologies and concepts are being developed and new ways of utilizing or integrating older and new technologies are being constantly developed." [1]


[1] Mário Freire and Manuela Pereira, Eds. Introduction to "Encyclopedia of Internet Technologies and Applications". Department of Informatics, University of Beira Interior, Portugal.

Saturday, December 17, 2005

Challenge to Traditional Software Engineering

Microsoft and Google have joined with Sun to fund a new lab at the University of California, Berkeley, dedicated to developing a new approach to software development.

In traditional software engineering, work is completed in sequential stages, from system concept and design through development, assessment or testing, deployment, and operation. This approach is often too slow. Instead of infrequent, well-tested upgrades, code for Internet services is continually modified on the fly. This fix-it-as-you-go approach enables speedier deployment, but it also requires a large technical support group to make sure operations are not disrupted as bugs are resolved.

The message from Prof. David Patterson of UC Berkeley, the founding director of this lab: "right now, it takes a large company employing hundreds of really smart people to support internet service". The new approach from the lab is to develop technology that eliminates the need for such a large organization, opening up innovation opportunities for small groups or even individual entrepreneurs. It is not surprising that big software companies such as Microsoft and Google welcome this new approach, since it could save them substantial human and financial resources.

According to Prof. David Patterson, the new approach could achieve this goal by applying statistical machine learning - the same technology used successfully in the recent autonomous vehicle grand challenge - to the development and operation of computer systems.
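I do not know what techniques the lab will actually use, but as a toy illustration of the general idea, here is a small Python sketch (with made-up latency numbers) of letting simple statistics over operational metrics flag anomalies automatically, the kind of monitoring chore that would otherwise fall to a large support team.

    import statistics

    # Hypothetical per-minute request latencies (ms) from a healthy period,
    # used as the baseline distribution.
    baseline = [102, 98, 110, 105, 99, 101, 97, 103, 100, 96, 104, 101]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    def is_anomalous(latency_ms, sigmas=3.0):
        """Flag observations far outside the baseline distribution."""
        return abs(latency_ms - mean) > sigmas * stdev

    for sample in [99, 107, 480]:
        print(sample, is_anomalous(sample))   # 99 False, 107 False, 480 True

A real system would learn far richer models than a single mean and standard deviation, but the principle is the same: let statistics, rather than a room full of engineers, watch the service.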

Wednesday, December 14, 2005

Disciplines in Artificial Intelligence

I have seen a discussion on the Usenet group comp.ai.philosophy about the disciplines (that is, the academic fields) that make up strong AI research (strong AI is the implementation of real intelligence; weak AI is the mimicry of intelligent behaviours). Summarising the discussion, there are seven distinct disciplines:
  • Philosophy
  • Mathematics
  • Computational Neuroscience
  • Electronic Engineering (a.k.a. Electronics and Computer Science)
  • Information Retrieval
  • Cognitive Psychology
  • Computer Science (emphasis on Artificial Intelligence)
What I am wondering is whether research related to robotics is missing here. If not, which one or more of the disciplines above covers it? Or should we list it as the eighth? Robotics is the first thing most people picture when they think of the disciplines and technologies within AI research. Although robotics research draws on many of the disciplines above, we cannot cleanly decouple every piece of them. If that were the criterion, the list should have only one item, namely mathematics, because mathematics is the foundation of all of the disciplines listed, robotics included. In that case, our discussion of the disciplines in strong AI research would be meaningless.

Monday, December 12, 2005

Pizza Order in the Semantic Web?!

A stubborn fever has been troubling me for nearly three weeks. That is too bad! I could not do any work at all. Only now do I really understand the true value of health, in particular for a young, energetic man. I have no idea how the virus managed to visit me and stay for so long this time. For God's sake, it happened during Xmas, this should-be-lovely season :(

I have had to make reading my routine, because it feels like the only thing I can do apart from taking medicine. I happened to pick up this piece from a website (play it with the sound on, please), and it gave me a real lift. On one hand, it is delightful to see the kind of application people can imagine for our future life, one that combines many technologies to deliver services in such an intelligent fashion. Customers would not only get satisfactory service but also considerate, "unexpected" care. That is so cool! On the other hand, I doubt how many people would be willing to accept this application. It seems to integrate too much sensitive data; in other words, too much sensitive data would be exposed in such "wildness". The Semantic Web turns the existing Web into a huge base of structured data, and the applications built upon it can communicate with and understand each other with the great assistance of that large amount of available structured data. However, that does not mean everyone should be able to get hold of the data without authorisation, particularly the sensitive data. Taking this example, I believe nobody expects their medical records to be unveiled to the public, let alone used by third parties in real applications. Sounds horrible! Maybe I am exaggerating the issue.
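To make the worry a little more concrete, here is a minimal sketch in Python (all names, predicates, and records are invented) of personal data held as RDF-style subject-predicate-object triples, together with a naive authorisation check of the kind such applications would need before exposing anything sensitive.

    # Invented example data: triples a Semantic Web pizza-ordering agent
    # might be able to read about a customer.
    triples = [
        ("person:alice", "ex:name", "Alice"),
        ("person:alice", "ex:favouriteTopping", "mushroom"),
        ("person:alice", "ex:medicalCondition", "lactose intolerance"),  # sensitive
    ]

    # Predicates that an unauthenticated third-party service may read.
    PUBLIC_PREDICATES = {"ex:name", "ex:favouriteTopping"}

    def query(subject, authorised=False):
        """Return the subject's triples, hiding sensitive ones from
        unauthorised requesters."""
        return [
            (s, p, o) for (s, p, o) in triples
            if s == subject and (authorised or p in PUBLIC_PREDICATES)
        ]

    print(query("person:alice"))                   # medical record stays hidden
    print(query("person:alice", authorised=True))  # everything, for trusted parties

Without that kind of gatekeeping, the very openness and machine-readability that make the Semantic Web powerful are exactly what make the medical-record scenario above so uncomfortable.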

Sunday, December 04, 2005

US Computer Science Eager for Funding

A report issued by the Defense Science Board earlier this year describes computer science as having outgrown the Defense Department's capacity to support and fund the industry it largely created. In the absence of a transition strategy, the possibility of the United States losing its competitive edge in university research is now very real. Over the past four years, DARPA estimates that it has cut funding for university research in computer science by almost half. In congressional testimony, DARPA director Anthony Tether said that some research projects have moved out of universities and into industry, and described DARPA funding as having remained "more or less constant." At the NSF, funding has actually increased, though the portion of computer science proposals it sponsors has dropped from roughly one-third to 14 percent. "We are looking at a situation where perhaps 40 percent of the good proposals we get, we don't have the money to fund," said the NSF's Peter Freeman, citing the war in Iraq and natural disasters as higher priorities for government funding. The lack of university funding has shifted much of the burden for supporting research to the corporate realm, which typically only supports short-term projects that have an evident business value. Since 1999, MIT's computer science department has seen the portion of its funding supplied by DARPA drop from 62 percent to 24 percent. As research funding becomes a lower priority for a government grappling with an escalating budget crisis, there is widespread concern that other nations, particularly China, could supplant the United States in the next 10 years as the world's leader in technological innovation. Evidence of this trend can be found in the facilities that many companies are establishing in China, India, and other countries, as well as the appearance of large new universities, such as one China recently opened with a capacity for 30,000 students.

[Source: ACM TechNews, Volume 7, Issue 873, December 2, 2005]