XML and 'The Semantic Web'
[July 31, 2002] W3C Web Ontology Working Group Releases Working Drafts for OWL Semantic Markup Language. Three initial working draft documents on 'OWL' have been published by the W3C's Web-Ontology Working Group (WebOnt). OWL is a semantic markup language for publishing and sharing ontologies on the World Wide Web. OWL is derived from the DAML+OIL Web Ontology Language and builds upon the Resource Description Framework. The designers expect that OWL will support the use of automated tools which "can use common sets of terms called ontologies to power services such as more accurate Web search, intelligent software agents, and knowledge management." The OWL Web Ontology Language is being designed "in order to provide a language that can be used for applications that need to understand the content of information instead of just understanding the human-readable presentation of content. OWL facilitates greater machine readability of web content than XML, RDF, and RDF-S support by providing an additional vocabulary for term descriptions." The Feature Synopsis for OWL Lite and OWL introduces the OWL language. The OWL Web Ontology Language 1.0 Reference provides a systematic, compact and informal description of all the modelling primitives of OWL. An OWL knowledge base is a collection of RDF triples as defined in the RDF/XML Syntax Specification; OWL prescribes a specific meaning for triples that use the OWL vocabulary. The Language Reference document specifies which collections of RDF triples constitute the OWL vocabulary and what the prescribed meaning of such triples is. The OWL Web Ontology Language 1.0 Abstract Syntax document describes a high-level, abstract syntax for both OWL and OWL Lite, a subset of OWL; it also provides a mapping from the abstract syntax to the OWL exchange syntax. [Full context]
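The Reference document's point that an OWL knowledge base is "a collection of RDF triples" with prescribed meanings for certain vocabulary terms can be sketched in a few lines. The sketch below uses `rdfs:subClassOf` (from RDF Schema, which OWL builds on) as its example of a term with fixed semantics; the `ex:` class names are hypothetical illustrations, not part of any real ontology.

```python
# Sketch: a knowledge base is just triples, but some vocabulary terms
# carry prescribed semantics. Here rdfs:subClassOf (from RDF Schema,
# on which OWL builds) licenses a simple class-membership inference.
# All ex: identifiers are hypothetical.

TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

kb = {
    ("ex:Fido", TYPE, "ex:Dog"),
    ("ex:Dog", SUBCLASS, "ex:Mammal"),
    ("ex:Mammal", SUBCLASS, "ex:Animal"),
}

def classes_of(individual, kb):
    """All classes an individual belongs to, following subClassOf upward."""
    found = {o for (s, p, o) in kb if s == individual and p == TYPE}
    changed = True
    while changed:
        supers = {o for (s, p, o) in kb if p == SUBCLASS and s in found}
        changed = not supers <= found
        found |= supers
    return found

print(sorted(classes_of("ex:Fido", kb)))
# prints ['ex:Animal', 'ex:Dog', 'ex:Mammal']
```

The point of the sketch is that the triples alone say nothing; it is the agreed meaning of `rdfs:subClassOf` that makes the extra memberships legal inferences.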
[March 08, 2002] W3C Publishes Web Ontology Language Requirements Document. The W3C Web Ontology Working Group has published an initial working draft document outlining requirements for the Ontology Web Language (OWL) 1.0 specification. The draft document "specifies usage scenarios, goals and requirements for a web ontology language. Automated tools can use common sets of terms called ontologies to power services such as more accurate Web search, intelligent software agents, and knowledge management." An 'ontology' in terms of the WG charter "defines the terms used to describe and represent an area of knowledge. Ontologies are used by people, databases, and applications that need to share domain information, where a domain is just a specific subject area or area of knowledge, like medicine, tool manufacturing, real estate, automobile repair, financial management, etc. Ontologies include computer-usable definitions of basic concepts in the domain and the relationships among them... An ontology formally defines a common set of terms that are used to describe and represent a domain. The WD specification motivates the need for a Web ontology language by describing six use cases. Some of these use cases are based on efforts currently underway in industry and academia, others demonstrate more long-term possibilities. The use cases are followed by design goals that describe high-level objectives and guidelines for the development of the language. These design goals will be considered when evaluating proposed features." [Full context]
[February 09, 2001] Semantic Web Activity Launched by W3C. On February 09, 2001, the World Wide Web Consortium formally inaugurated a Semantic Web Activity within the W3C Technology and Society Domain. The 'Semantic Web' is "a vision: the idea of data on the Web defined and linked in a way that it can be used by machines for automation, integration and reuse. The Web can reach its full potential only if it becomes a place where data can be shared and processed by automated tools as well as by people." Part of this vision is "developing an environment to permit each user to make the best use of the resources available on the Web." The Semantic Web Activity is being launched as a successor to the W3C Metadata Activity. Key participants in the new activity include, in addition to W3C Director Tim Berners-Lee: (1) Eric Miller (W3C, Activity Lead), (2) Ralph Swick (W3C, Development Lead), (3) Dan Brickley (University of Bristol, RDF IG Chair and RDF Core WG co-chair), and (4) Brian McBride (HP, RDF Core WG co-chair). Planned activities of W3C toward development of the Semantic Web vision are described in the W3C Semantic Web Activity Statement. We read: "For the Web to scale, programs must be able to share and process data even when these programs have been designed totally independently. The Web can reach its full potential only if it becomes a place where data can be shared and processed by automated tools as well as by people. The Semantic Web Activity, to this end, has been established to serve a leadership role, in both the design of enabling specifications and the open, collaborative development of technologies that support the automation, integration and reuse of data across various applications. To facilitate this goal, the Semantic Web Activity builds upon the existing foundation work accomplished by the W3C Metadata Activity with the following additional objectives: (1) Continue the work of the RDF Interest Group. 
The RDF Interest Group will coordinate implementation and deployment of RDF and will provide liaison with new work in the W3C and the wider community on matters relating to RDF. (2) Undertake revisions to the RDF Model and Syntax Recommendation. (3) Complete work on the RDF Schema specification. This Working Group will incorporate the results of the ongoing RDF implementation experience and consider directions established by the XML Schema Candidate Recommendation. (4) Coordinate with W3C initiatives focused on defining semantics for supporting Web technologies. This includes P3P, CC/PP, XML Protocols, WAI, and other infrastructure for remote services. (5) Coordinate with selected non-W3C initiatives and individual activities working on Semantic Web technologies. This coordination includes, but is not limited to, DCMI, DAML, OIL, and SHOE. The current international collaboration between the DAML and OIL groups on a Web ontology layer is expected to become a part of this W3C activity. The goals of coordination are to ensure the generality of the solution, to provide solutions and experience, to prevent arbitrary divergence, and to ease adoption of the technology in related fields. (6) Perform advanced development to design and develop supporting XML and RDF technologies. The development project is intended to facilitate distributed collaboration with a specific intent to increase the level of automation of the W3C Web site and to develop open-source RDF infrastructure support modules."
[April 11, 2001] "The Semantic Web. A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities." By Tim Berners-Lee, James Hendler, and Ora Lassila. In Scientific American Volume 284, Number 5 (May, 2001), pages 34-43. Cover story title: 'Get the Idea? Tomorrow's Web Will.' "Most of the Web's content today is designed for humans to read, not for computer programs to manipulate meaningfully. Computers can adeptly parse Web pages for layout and routine processing -- here a header, there a link to another page -- but in general, computers have no reliable way to process the semantics: this is the home page of the Hartman and Strauss Physio Clinic, this link goes to Dr. Hartman's curriculum vitae. The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users. Such an agent coming to the clinic's Web page will know not just that the page has keywords such as 'treatment, medicine, physical, therapy' (as might be encoded today) but also that Dr. Hartman works at this clinic on Mondays, Wednesdays and Fridays and that the script takes a date range in yyyy-mm-dd format and returns appointment times. And it will 'know' all this without needing artificial intelligence on the scale of 2001's Hal or Star Wars's C-3PO. Instead these semantics were encoded into the Web page when the clinic's office manager (who never took Comp Sci 101) massaged it into shape using off-the-shelf software for writing Semantic Web pages along with resources listed on the Physical Therapy Association's site. The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation. The first steps in weaving the Semantic Web into the structure of the existing Web are already under way. 
In the near future, these developments will usher in significant new functionality as machines become much better able to process and 'understand' the data that they merely display at present... Two important technologies for developing the Semantic Web are already in place: eXtensible Markup Language (XML) and the Resource Description Framework (RDF). XML lets everyone create their own tags -- hidden labels such as <zip code> or <alma mater> that annotate Web pages or sections of text on a page. Scripts, or programs, can make use of these tags in sophisticated ways, but the script writer has to know what the page writer uses each tag for. In short, XML allows users to add arbitrary structure to their documents but says nothing about what the structures mean. Meaning is expressed by RDF, which encodes it in sets of triples, each triple being rather like the subject, verb and object of an elementary sentence. These triples can be written using XML tags. In RDF, a document makes assertions that particular things (people, Web pages or whatever) have properties (such as 'is a sister of,' 'is the author of') with certain values (another person, another Web page). This structure turns out to be a natural way to describe the vast majority of the data processed by machines. Subject and object are each identified by a Universal Resource Identifier (URI), just as used in a link on a Web page. (URLs, Uniform Resource Locators, are the most common type of URI.) The verbs are also identified by URIs, which enables anyone to define a new concept, a new verb, just by defining a URI for it somewhere on the Web... this is not the end of the story, because two databases may use different identifiers for what is in fact the same concept, such as zip code. A program that wants to compare or combine information across the two databases has to know that these two terms are being used to mean the same thing. 
Ideally, the program must have a way to discover such common meanings for whatever databases it encounters. A solution to this problem is provided by the third basic component of the Semantic Web, collections of information called ontologies. In philosophy, an ontology is a theory about the nature of existence, of what types of things exist; ontology as a discipline studies such theories. Artificial-intelligence and Web researchers have co-opted the term for their own jargon, and for them an ontology is a document or file that formally defines the relations among terms. The most typical kind of ontology for the Web has a taxonomy and a set of inference rules... The real power of the Semantic Web will be realized when people create many programs that collect Web content from diverse sources, process the information and exchange the results with other programs; the Semantic Web promotes this synergy: even agents that were not expressly designed to work together can transfer data among themselves when the data come with semantics."
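The triple model and the "zip code" reconciliation problem described in the passage above can be sketched without any RDF machinery: triples become plain (subject, predicate, object) tuples of URIs, and the ontology shrinks to a set of equivalence assertions. Every URI and field name below is a hypothetical example, not a real vocabulary.

```python
# Sketch of the RDF triple model plus the article's "zip code" problem:
# triples are (subject, predicate, object) tuples of URIs, and a tiny
# ontology of equivalences lets a program treat two databases' different
# identifiers as one concept. All identifiers are hypothetical.

triples = [
    ("http://example.org/people#alice",
     "http://example.org/terms#isAuthorOf",
     "http://example.org/docs#report1"),
]

def objects(subject, predicate, store):
    """Return every object asserted for a subject/predicate pair."""
    return [o for (s, p, o) in store if s == subject and p == predicate]

# Equivalence assertions: the ontology's answer to two schemas
# naming the same concept differently.
equivalences = {("db1:zipCode", "db2:postalCode")}

def same_concept(a, b):
    """True if the ontology asserts that a and b denote one concept."""
    return a == b or (a, b) in equivalences or (b, a) in equivalences

print(objects("http://example.org/people#alice",
              "http://example.org/terms#isAuthorOf", triples))
# prints ['http://example.org/docs#report1']
print(same_concept("db1:zipCode", "db2:postalCode"))
# prints True
```

Because predicates are themselves URIs, anyone can coin a new "verb" by publishing a URI for it, exactly as the article observes; the equivalence set is where a shared ontology earns its keep.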
"Semantic Web Enabled Web Services." By Dieter Fensel (University of Innsbruck) and Christoph Bussler (Oracle Corporation). From the Resources Collection of the Semantic Web Services Initiative (SWSI). April 2003. 36 psges (slides). "Web Services will transform the web from a collection of information into a distributed device of computation. In order to reach full potential, appropriate description means for web services need to be developed. For this purpose we developed a full-fledged Web Service Modeling Framework (WSMF) that provides the appropriate conceptual model for developing and describing web services and their composition. The philosophy of WSMF is based on the principle of maximal de-coupling complemented by scalable mediation service. This is a prerequisite for applying semantic web technology for web service discovery, configuration, comparison, and combination. This presentation provides a vision of web service technology, discussing the requirements for making this technology workable, and sketching the Web Service Modeling Framework..." See also the earlier (2002) paper "The Web Service Modeling Framework WSMF" by the same authors. The Semantic Web Services Initiative (SWSI) is "an ad hoc initiative of academic and industrial researchers, many of which are involved in DARPA and EU funded research projects. The SWSI mission is threefold: (1) to create infrastructure that combines Semantic Web and Web Services technologies to enable maximal automation and dynamism in all aspects of Web service provision and use, including (but not limited to) discovery, selection, composition, negotiation, invocation, monitoring and recovery; (2) to coordinate ongoing research initiatives in the Semantic Web Services area; (3) to promote the results of SWSI work to academia and industry..."
[January 25, 2002] Seminar on Rule Markup Techniques for the Semantic Web. A one-week seminar on 'Rule Markup Techniques' will be hosted by the Dagstuhl International Conference and Research Center for Computer Science (Wadern, Germany) on February 3-8, 2002. "Rule systems (e.g., extended Horn logics) suitable for the Web, their (XML and RDF) syntax, semantics, tractability/efficiency, and transformation/compilation will be explored. Both derivation rules (which may be evaluated bottom-up as in deductive databases, top-down as in logic programming, or by tabled resolution as in XSB) and reaction rules (also called 'active' or 'event-condition-action' rules), as well as any combinations, will be considered. This 'Rule Markup Techniques' seminar aims at bringing together the classical- and Web-rule communities to cross-fertilize between their foundations, methods, and applications. The long-term goal is a Web-based standard for rules that makes use of, and is also useful to, the classical rule perspective. The seminar is expected to contribute to some open issues of recent proposals such as Notation 3 (N3), DAML-Rules, and the Rule Markup Language (RuleML). Furthermore, by studying issues of combining rules and taxonomies via sorted logics, description logics, or frame systems, the Seminar will also discuss the US-European proposal DAML+OIL. Two particular issues that will be addressed during this seminar are efficient implementation techniques (e.g., via Java-based rule engines) and major exchange applications (e.g., using e-business rules)." Conference organizers include internationally-recognized authorities on rule and agent markup languages: Harold Boley (DFKI Kaiserslautern, Germany), Benjamin Grosof (MIT Sloan School of Management, USA), Said Tabet (Nisus, USA), and Gerd Wagner (Eindhoven University of Technology, The Netherlands). [Full context]
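The seminar notice distinguishes derivation rules evaluated bottom-up, "as in deductive databases," from reaction rules. A minimal sketch of that bottom-up evaluation style is below; it is a toy fixpoint loop over tuple-shaped facts, not an implementation of any of the rule languages (RuleML, N3, DAML-Rules) named above, and the parent/ancestor predicates are hypothetical examples.

```python
# Toy bottom-up evaluator for derivation rules, in the spirit of the
# deductive-database evaluation mentioned in the seminar notice.
# Facts are tuples; the rule set derives ancestor facts from parent
# facts, iterated to a fixpoint. All predicates are hypothetical.

def ancestor_rule(facts):
    """One round of rule application: derive new ancestor facts."""
    derived = set()
    for (rel1, a, b) in facts:
        if rel1 == "parent":                  # parent(a,b) => ancestor(a,b)
            derived.add(("ancestor", a, b))
        for (rel2, c, d) in facts:            # ancestor(a,b) & ancestor(b,d)
            if rel1 == "ancestor" and rel2 == "ancestor" and b == c:
                derived.add(("ancestor", a, d))   # => ancestor(a,d)
    return derived

facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}
while True:                                   # iterate to a fixpoint
    new = ancestor_rule(facts) - facts
    if not new:
        break
    facts |= new

print(sorted(f for f in facts if f[0] == "ancestor"))
# prints [('ancestor', 'ann', 'bob'), ('ancestor', 'ann', 'cal'), ('ancestor', 'bob', 'cal')]
```

Top-down (logic-programming) and tabled-resolution evaluation, also mentioned in the notice, would answer specific queries instead of materializing every derivable fact.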
[August 16, 2001] W3C Web Ontology Working Group Formed to Extend the 'Semantic Reach' of XML/RDF Metadata Efforts. A posting from Dan Connolly to the W3C 'www-rdf-logic' mailing list announces the formation of a new Web Ontology Working Group within W3C. The Web Ontology (WebOnt) Working Group has been chartered to design a web ontology language "that builds on current web languages that allow the specification of classes and subclasses, properties and subproperties (such as RDFS), but which extends these constructs to allow more complex relationships between entities including: means to limit the properties of classes with respect to number and type, means to infer that items with various properties are members of a particular class, a well-defined model of property inheritance, and similar semantic extensions to the base languages. The web ontology language must support the development and linking of ontologies together, in a web-like manner. The products of this working group must be supported by a formal semantics allowing language designers, tool builders, and other 'experts' to be able to precisely understand the meaning and 'legal' inferences for expressions in the language. The language will use the XML syntax and datatypes wherever possible, and will be designed for maximum compatibility with XML and RDF language conventions." [Full context]
[September 27, 2000] 'The Semantic Web' (a phrase coined by Tim Berners-Lee, as far as I know) serves also as a convenient title for this collection of references to projects that focus upon (markup language) "semantics" in the context of the Internet. Such endeavors move beyond the level of the particular descriptive meta-markup syntax formalized in SGML, and in its first popular application, HTML. SGML's studied disinterest in (and anathematizing of) primitive semantics (datatypes, relations) was arguably a matter of grand equivocation in the name of separating the specification for logical/structural representation from specification of application level processing semantics. HTML's fundamental disregard for the core creedal elements of SGML (hard-coding display semantics directly into the application) contributed equally to the (chaotic, retarded) advance of generalized descriptive markup technologies. Modest efforts to shore up SGML/XML at the semantic level are [Fall 2000] seen in the W3C XML Schema work (viz., XML Schema Part 2: Datatypes) and in several separate work initiatives, some of which are referenced below. See especially the web site SemanticWeb.org for references, and in a parallel vein, "Conceptual Modeling and Markup Languages." Using syntax (without semantics) works, sort-of, in only one case; otherwise, design by syntax is rather like counting on your ten fingers, fully focused upon meter, as a means of writing poetry.
Work Initiatives and Reference Collections
The W3C Semantic Web Activity is the most visible industry-based initiative. For extensive references, see SemanticWeb.org. Maintained by Stefan Decker (Stanford University). SemanticWeb.org is operated by three research groups: The Onto-Agents and Scalable Knowledge Composition (SKC) Research Group at Stanford University, The Ontobroker-Group at the University of Karlsruhe, and The Protégé Research Group at Stanford University.
See: 'Semantic web' initiatives
Design Issues. Architectural and philosophical points. By Tim Berners-Lee. The reference list includes several key papers on RDF and 'The Semantic Web'.
Semantic Web Development. Web page at W3C.
Ontology Interchange Language (OIL)
Resource Description Framework (RDF)
(XML) Topic Maps
XOL - XML-Based Ontology Exchange Language
Simple HTML Ontology Extensions (SHOE)
XML Belief Network File Format (Bayesian Networks)
Predictive Model Markup Language (PMML)
Process Interchange Format XML (PIF-XML)
DARPA Agent Markup Language (DAML)
Relational-Functional Markup Language (RFML)
Ontology and Conceptual Knowledge Markup Languages
Case Based Markup Language (CBML)
Artificial Intelligence Markup Language (AIML)
"Business Rules Markup Language (BRML)."
Business Rules for Electronic Commerce. Project at IBM T.J. Watson Research. See also Business Rules Markup Language and 'Agent Communication Markup Language'.
"Description Logics Markup Language (DLML)."
Semantic Web resources. Maintained by Jeff Z. Pan (Department of Computer Science, University of Manchester).
Self Organizing Maps. Pioneered by Teuvo Kohonen. Used to access the Medline database, for example. See Self-Organized Alerting and Search Services and Google.
KBS/Ontology Projects and Groups - References Maintained by Peter Clark
'Semantic Web Agreement Group'
[July 16, 2003] "XML Semantics and Digital Libraries." By Allen Renear (University of Illinois at Urbana-Champaign), David Dubin (University of Illinois at Urbana-Champaign), C. M. Sperberg-McQueen (MIT Laboratory for Computer Science), and Claus Huitfeldt (Department for Culture, Language, and Information Technology, Bergen University Research Foundation). Pages 303-305 (with 14 references) in Proceedings of the Third ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL 2003, May 27-31, 2003, Rice University, Houston, Texas, USA). Session on Standards, Markup, and Metadata. "The lack of a standard formalism for expressing the semantics of an XML vocabulary is a major obstacle to the development of high-function interoperable digital libraries. XML document type definitions (DTDs) provide a mechanism for specifying the syntax of an XML vocabulary, but there is no comparable mechanism for specifying the semantics of that vocabulary -- where semantics simply means the basic facts and relationships represented by the occurrence of XML constructs. A substantial loss of functionality and interoperability in digital libraries results from not having a common machine-readable formalism for expressing these relationships for the XML vocabularies currently being used to encode content. Recently a number of projects and standards have begun taking up related topics. We describe the problem and our own project... Our project focuses on identifying and processing actual document markup semantics, as found in existing document markup languages, and not on developing a new markup language for representing semantics in general... XML semantics in our sense refers simply to the facts and relationships expressed by XML markup. It does not refer to processing behavior, machine states, linguistic meaning, business logic, or any of the other things that are sometimes meant by 'semantics'. 
[For example:] (1) Propagation: Often the properties expressed by markup are understood to be propagated, according to certain rules, to child elements. For instance, if an element has the attribute specification lang='de', indicating that the text is in German, then all child elements have the property of being in German, unless the attribution is defeated by an intervening reassignment. Language designers, content developers, and software designers all depend upon a common understanding of such rules. But XML DTDs provide no formal notation for specifying which attributes are propagated or what the rules for propagation are. (2) Class Relationships and Synonymy: XML itself contains no general constructs for expressing class membership or hierarchies among elements, attributes, or attribute values -- one of the most fundamental relationships in contemporary information modeling. (3) Ontological variation in reference: XML markup might appear to indicate that the same thing, is-a-noun, is-a-French-citizen, is-illegible, has-been-copyedited; but obviously either these predicates really refer to different things, or must be given non-standard interpretations. (4) Parent/Child overloading: The parent/child relations of the XML tree data structure support a variety of implicit substantive relationships... These examples demonstrate several things: what XML semantics is, that it would be valuable to have a system for expressing XML semantics, and that it would be neither trivial nor excessively ambitious to develop such a system. We are not attempting to formalize common sense reasoning in general, but only the inferences that are routinely intended by markup designers, assumed by content developers, and inferred by software designers... 
The BECHAMEL Markup Semantics Project led by Sperberg-McQueen (W3C/MIT) grew out of research initiated by in the late 1990s and is a partnership with the research staff and faculty at Bergen University (Norway) and the Electronic Publishing Research Group at the University of Illinois. The project explores representation and inference issues in document markup semantics, surveys properties of popular markup languages, and is developing a formal, machine-readable declarative representation scheme in which the semantics of a markup language can be expressed. This scheme is applied to research on information retrieval, document understanding, conversion, preservation, and document authentication. An early Prolog inferencing system has been developed into a prototype knowledge representation workbench for representing facts and rules of inference about structured documents."
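The "propagation" rule described in the example above (a `lang` attribute understood to apply to descendants until reassigned) is exactly the kind of convention XML DTDs cannot express formally. A minimal sketch of the convention, using only the standard library, is below; the document and tag names are hypothetical.

```python
# Sketch of the propagation rule discussed above: a lang attribute set
# on one element applies to its descendants until an intervening element
# reassigns it. XML DTDs give no formal way to state this rule; the code
# simply implements the convention. Tag names are hypothetical.

import xml.etree.ElementTree as ET

DOC = """<doc lang="en">
  <sec lang="de">
    <p>Guten Tag</p>
  </sec>
  <p>Hello</p>
</doc>"""

def effective_lang(root):
    """Map each element to the language it carries by propagation."""
    langs = {}
    def walk(elem, inherited):
        lang = elem.get("lang", inherited)   # a local value defeats the inherited one
        langs[elem] = lang
        for child in elem:
            walk(child, lang)
    walk(root, None)
    return langs

root = ET.fromstring(DOC)
langs = effective_lang(root)
for elem in root.iter():
    print(elem.tag, langs[elem])
```

Here the inner `<p>` is German by inheritance from `<sec>`, while the outer `<p>` remains English; nothing in the markup itself says so, which is the paper's point.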
[November 22, 2002] "The Myths of 'Standard' Data Semantics. Faulty Assumptions Must Be Rooted Out." By William C. Burkett (Senior Information Engineer, PDIT). In XML Journal Volume 3, Issue 11 (November 2002). "Much of the literature heralding the benefits of XML has focused on its application as a medium for application interoperability. With (a) the Internet as a platform, (b) Web services as the functional building block components of an orchestrated application, and (c) XML as a common data format, applications will be able to communicate and collaborate seamlessly and transparently, without human intervention. All that's needed to make this a reality is (d) for everyone to agree on and use XML tags the same way so that when an application sees a tag such as <firstName> it will know what it means. This intuitive understanding makes a lot of sense, which is why so many organizations have sprung into existence to create their own vocabularies (sets of tags) to serve as the 'lingua franca for data exchange in <insert your favorite industry, application, or domain>.' This intuitive understanding is so pervasive that it's even a key part of the U.S. GAO recommendations to Senator Joseph Lieberman (chairman of the Committee on Governmental Affairs, U.S. Senate) on the application of XML in the federal government. This report warns of the risk that "...markup languages, data definitions, and data structures will proliferate. If organizations develop their systems using unique, nonstandard data definitions and structures, they will be unable to share their data externally without providing additional instructions to translate data structures from one organization and system to another, thus defeating one of XML's major benefits." The perspective of these efforts is that the standardization and promotion of the data element definitions and standard data vocabularies (SDV) will solve the application interoperability problem. 
Unfortunately, this intuitive understanding -- like many intuitive understandings -- doesn't survive the trials of real-life application because important (and seemingly trivial) assumptions are poorly conceived. This article will examine some of these assumptions and articulate several myths of 'standard' data semantics. The notion that data semantics can be standardized through the creation and promulgation of data element names/definitions or vocabularies is based on several assumptions that are actually myths: [1] Myth 1: Uniquely named data elements will enable, or are enough for, effective exchange of data semantics (i.e., information). [2] Myth 2: Uniquely named data elements will be used consistently by everybody to mean the same thing. [3] Myth 3: Uniquely named data elements can exist -- uniquely named as opposed to uniquely identified data elements. Many will readily acknowledge that these are, in fact, myths and that they don't really hold these assumptions. However, it seems that users of namespaces and developers of SDVs and metadata registries are pursuing their work as if these assumptions were true. No mechanisms or strategies have appeared in the extant literature that acknowledge, explain, or address the challenges that arise due to these faulty assumptions. The reasons that these assumptions are faulty fall into the following three areas of SDV development and use: (1) Scope, (2) Natural language use, and (3) Schema evolution... The purpose of this article hasn't been to argue that the problems and the challenges that face the SDV/registry development projects are unsolvable. Rather, it is to suggest that the solution vision must be more expansive. Faulty assumptions must be rooted out, and the problems that are thereby exposed must be explicitly and directly addressed. Despite their intuitive appeal, namespaces, SDVs, registries, and unique data element names will not solve the problem of interoperability. 
What's needed is the recognition that the semantics of a schema (or, more precisely, the semantics of data governed by a schema) must be explicitly bound to a known community that it serves, and that bridges between the communities will be an inevitable part of any comprehensive solution..." [alt URL]
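Burkett's closing point, that bridges between communities are inevitable, can be sketched as an explicit translation mapping between two vocabularies rather than a hoped-for universal tag set. The tag names below (including the article's own `<firstName>`) and the partner vocabulary are hypothetical illustrations.

```python
# Sketch of the "bridge" idea: instead of hoping two communities use
# identical tag names, an explicit mapping translates one community's
# vocabulary into the other's. Tag names are hypothetical examples.

import xml.etree.ElementTree as ET

# A bridge maintained by (or negotiated between) the two communities.
BRIDGE = {"firstName": "givenName", "lastName": "familyName"}

def translate(xml_text, mapping):
    """Rename elements of one vocabulary into the partner vocabulary."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        elem.tag = mapping.get(elem.tag, elem.tag)  # unmapped tags pass through
    return ET.tostring(root, encoding="unicode")

src = "<person><firstName>Ada</firstName><lastName>Lovelace</lastName></person>"
print(translate(src, BRIDGE))
# prints <person><givenName>Ada</givenName><familyName>Lovelace</familyName></person>
```

A renaming bridge like this is, of course, the easy case; Burkett's three problem areas (scope, natural language use, schema evolution) are precisely where one-to-one renaming breaks down and richer mediation is needed.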
"I have just completed a general survey of ontology editing software for building ontologies. I am preparing a short report of perhaps 1,500 words that summarizes the use of ontologies in IT solutions to accompany the tabulated findings of the survey. Approximately 50 editing tools were identified and described concisely in 12 categories using editorial input from the tools' suppliers. I believe this represents the largest number of ontology editors ever compiled. While ontologies have a close association with RDF and the Semantic Web, they do not necessarily rely on XML or even Web applications. But the trend in knowledge technology is toward a unification based on XML and the Web..." See following reference. Admin note 2004-01-23: see the more recent version at XML.com.
[November 11, 2002] "Ontology Building: A Survey of Editing Tools." By Michael Denny. From XML.com. November 06, 2002. ['Earlier this year at the WWW2002 conference, there was a surprisingly strong interest in ontologies--structured models of known facts. Ontologies have come out of the research labs and into common use for modeling complex information. Our main feature this week is a survey of tools available for editing ontologies. As part of his survey Michael Denny also provides a great introduction to what ontologies are, how they vary, and how they are constructed.'] "The semantic structuring achieved by ontologies differs from the superficial composition and formatting of information (as data) afforded by relational and XML databases. With databases virtually all of the semantic content has to be captured in the application logic. Ontologies, however, are often able to provide an objective specification of domain information by representing a consensual agreement on the concepts and relations characterizing the way knowledge in that domain is expressed. This specification can be the first step in building semantically-aware information systems to support diverse enterprise, government, and personal activities... In the Semantic Web vision, unambiguous sense in a dialog among remote applications or agents can be achieved through shared reference to the ontologies available on the network, albeit an always changing combination of upper level and domain ontologies. We just have to assume that each ontology is consensual and congruent with the other shared ontologies (e.g., ontologies routinely include one another). The result is a common domain of discourse that can be interpreted further by rules of inference and application logic. Note that ontologies put no constraints on publishing (possibly contradictory) information on the Web, only on its (possible) interpretations... 
The wide array of information residing on the Web has given ontology use an impetus, and ontology languages increasingly rely on W3C technologies like RDF Schema as a language layer, XML Schema for data typing, and RDF to assert data... The 'Survey of Ontology Editors' covers software tools that have ontology editing capabilities and are in use today. The tools may be useful for building ontology schemas (terminological component) alone or together with instance data. Ontology browsers without an editing focus and other types of ontology building tools are not included. Otherwise, the objective was to identify as broad a cross-section of editing software as possible. The editing tools are not necessarily production level development tools, and some may offer only limited functionality and user support. Concise descriptions of each software tool were compiled and then reviewed by the organization currently providing the software for commercial, open, or restricted distribution. The descriptions are factored into a dozen different categories covering important functions and features of the software... Despite the immaturity of the field, we were able to identify a surprising number of ontology editors -- about 50 overall..."
Receive daily news updates from Managing Editor, Robin Cover.