
The Singularity Just Got A Lot Closer

Thursday, June 2nd, 2011

New tool allows programmers to build artificial intelligence into almost any software application.

SDK for Machine Learning

A new technology enables almost any application to learn like a human. The Topic-Mapper software development kit (SDK) by ai-one inc. reads and understands unstructured data without any human intervention. It allows developers to build artificial intelligence into almost any software program. This is a major step towards what Ray Kurzweil calls the technological singularity – where superhuman intelligence will transform history.

Unlike other machine learning approaches, ai-one’s technology extracts the inherent meaning of data without the need for any external references. A team of researchers spent more than eight years and $6.5 million building what they call “biologically inspired intelligence” that works like a brain. It learns patterns by reading data at the bit-level. “It has no preconceived notions about anything,” explains founder Walt Diggelmann, “so it works in any language and with any data set. It simply learns what you feed it. The more it reads, the more it learns, the better it gets at recognizing patterns and answering questions.”

Technical Advances

Lightweight Ontologies (LWO)

The technology incorporates two major technical advances. First, it automatically creates what ai-one describes as a “lightweight ontology” (LWO): the system determines the relationships between data elements as they are fed in. The primary benefit of an LWO is that it is completely objective — it makes associations without editorial (human) bias. LWOs are also highly adaptive, automatically recalculating when new data is ingested. Unlike traditional ontologies, LWOs require no maintenance.

Dynamic Topologies

Second, ai-one’s technology generates “dynamic topologies” that transform the data structure to find the best answer to any question. The benefits of dynamic topologies include incremental learning – the system gets smarter as it is exposed to more questions. Moreover, it can deal with ambiguity and unknown situations. The result is that the system can answer questions that a person wouldn’t normally know to ask.

The SDK opens the door for many new, disruptive software applications. For example, it can replace search algorithms with more accurate “answer engines” that deliver the most precise answer to any question.

“We offer a core programming technology,” says Tom Marsh, President and COO of ai-one. “The possibilities are almost endless. Our business model is to license the SDK to software developers to build end-user applications. Our goal is to get Topic-Mapper to as many well-qualified programmers as possible and let the creativity of the market take over.”

Adoption has been quick.

The first version of the SDK was released in February 2011. In less than three months, more than 20 consulting partners signed up to use the technology to build commercial applications – mostly in Europe. Swissport matches passenger manifests against the US Department of Homeland Security’s No-Fly List. The core technology is used by Swiss law enforcement CSI labs to match shoeprints and other evidence from multiple crime scenes. Most recently, ai-ibiomics announced it will use ai-one’s SDK to read genome sequences to provide personalized medical services in Germany.

A logical next step is for the technology to enable eCommerce, social media and other online applications to provide end-users with the most relevant, most accurate information for any given situation.

ai-one will be showcasing the technology at booth #107 during the SemTech 2011 conference on June 7-8. Developers can request a 30-day evaluation copy online.

Lightweight Ontologies (LWO) versus Full-Fledged Ontologies

Tuesday, May 31st, 2011

Prof. Dr. Ulrich Reimer of University of Konstanz and University of Applied Sciences St. Gallen Institute for Information and Process Management explains the value of lightweight ontologies.

1. What are ontologies and what are they good for?

Originally, the term ontology denoted a philosophical discipline concerned with the study of the nature of being and existence as well as the basic categories of being and the relations among them. In computer science the term ontology stands for an engineering artefact and thus has a quite different meaning:

Definition: An ontology is a formal representation of concepts in a domain of discourse and the relationships between those concepts.

An ontology can therefore serve as a shared vocabulary when:

  • Information systems need to exchange information among each other and therefore need a common basis for denominating objects in the domain.
  • People wish to share information objects among each other and therefore need a common vocabulary to characterize the objects so that they can be more easily retrieved and shared.
  • Knowledge-based systems need to reason about entities within a given domain using terminological reasoning to diagnose malfunctioning devices, design and configure complex systems, understand natural language texts, etc.

The definition of an ontology leaves open what exactly “a formal representation of concepts in a domain of discourse” means. It is now standard to use description logics (a subset of first-order logic) to formally represent an ontology. Current ontology languages like OWL and (with some restrictions) RDF Schema are based on such description logics. In practice, however, ontologies are sometimes represented informally, e.g. by a graph. In that case their correct interpretation by a computer is not guaranteed and, even worse, they cannot be freely shared between applications.

2. Ontologies have varying degrees of expressiveness

The level of detail in which the concepts in an ontology are represented can vary quite considerably. In the simplest case an ontology is just a taxonomy (or concept hierarchy; see Fig.1).

Fig.1: Concept hierarchy (taxonomy)

Concepts can be represented in more detail by stating additional relationships between concepts as well as properties all instances of a concept have (see Fig.2).

Fig.2: Concept hierarchy with additional relationships

Going even further, relationships between concepts can be said to have certain properties (e.g. being transitive like the part-of relation), to fulfill certain cardinality restrictions (to state that an airplane has two wings), to be not fulfilled (to state that a bachelor does not have a relationship “being-married” to a female person), etc.
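Transitivity is the easiest of these relation properties to make concrete. The sketch below (with invented part-of facts) shows why declaring a relation transitive matters: a reasoner can then infer pairs that were never stated explicitly.

```python
# Toy illustration of a typed, transitive relation in an ontology. Because
# part-of is transitive, "piston part-of engine" and "engine part-of car"
# together entail "piston part-of car". All facts here are invented examples.

part_of = {
    ("piston", "engine"),
    ("engine", "car"),
    ("wheel", "car"),
}

def transitive_closure(relation):
    """All pairs entailed by the transitivity of `relation`."""
    closure = set(relation)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

print(("piston", "car") in transitive_closure(part_of))  # True, though never stated
```

A description-logic reasoner performs this kind of entailment (and far more, such as cardinality checks) automatically from the ontology's axioms.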

2.1 Lightweight ontologies

Ontologies with restricted expressiveness, like taxonomies (cf. Fig.1), are sometimes called lightweight ontologies (LWO). A lightweight ontology can also mean a collection of concepts related to each other via untyped associations that do not specify the kind of relationship (cf. Fig.3). Typically, a numerical weight between 0 and 1 is assigned to each association, indicating its semantic strength (or the semantic nearness of the related concepts). These kinds of lightweight ontologies are also called associative networks.

In the following we will focus on lightweight ontologies of the latter kind:

Definition: A lightweight ontology (or associative network or LWO) is a directed graph whose nodes represent concepts. The links between the nodes indicate associations (or untyped relationships) between the corresponding concepts. The associations express semantic nearness. An association between two concept nodes is labelled with an association strength between 0 and 1.

Fig.3: Lightweight ontology (associative network)
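The definition above describes a simple data structure. A minimal sketch in Python (all concept names and weights are invented for illustration):

```python
# Sketch of an associative network (lightweight ontology) as defined above:
# a directed graph whose nodes are concepts and whose edges carry an
# association strength between 0 and 1.

class AssociativeNetwork:
    def __init__(self):
        # concept -> {related concept -> association strength}
        self.edges = {}

    def associate(self, source, target, strength):
        if not 0.0 <= strength <= 1.0:
            raise ValueError("association strength must lie in [0, 1]")
        self.edges.setdefault(source, {})[target] = strength

    def neighbours(self, concept, min_strength=0.0):
        """Concepts associated with `concept`, strongest first."""
        related = self.edges.get(concept, {})
        return sorted(
            ((c, s) for c, s in related.items() if s >= min_strength),
            key=lambda pair: -pair[1],
        )

lwo = AssociativeNetwork()
lwo.associate("life style", "nutrition", 0.8)
lwo.associate("life style", "physical exercise", 0.7)
lwo.associate("nutrition", "life style", 0.4)  # directed: strengths may differ

print(lwo.neighbours("life style"))
```

Note that the graph is directed: the strength from "life style" to "nutrition" need not equal the strength in the opposite direction.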

Lightweight ontologies are sufficient for many kinds of applications, especially in the area of information retrieval where typed relationships between concepts are not really needed:

  • Query extension: There is a huge gap between a user’s information need and its transformation into an appropriate query for obtaining the relevant information. It can be quite cumbersome to find the needed information because there may be many ways to refer to a particular concept (e.g. “MSD”, “musculoskeletal disorder”, “lower back pain”). A lightweight ontology which relates semantically similar concepts with each other enables a search engine to extend a query to include additional, related concepts. For example, entering the search term “life style” would also retrieve documents that contain the words “nutrition” or “physical exercise” if the underlying ontology contains the proper relations between these terms (cf. Fig.3). Query extension introduces an independence from actual words occurring in a document or in a query. This is sometimes called concept-based or content-oriented retrieval (as opposed to word-based retrieval).
  • Document categorization / document clustering: Rules for categorising text documents into predefined categories typically refer to the words occurring in the documents. A lightweight ontology as background knowledge introduces an independence from concrete wording as discussed above for query expansion. Similarly, lightweight ontologies can improve document clustering.
  • Tag cloud generation: By using a lightweight ontology the concepts most strongly related to a query term can be shown as a tag cloud (cf. Fig.4). The font size of the tags in the cloud and their closeness to the query term correspond to association strength. A tag cloud:
    • helps the user to get a better understanding of the underlying domain and thus of his or her information need and how to properly express it;
    • allows a user to explore the term space defined by the lightweight ontology and thus to improve his or her understanding of the underlying domain;
    • allows a user to reformulate or extend the original query by selecting terms from the tag cloud.
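The query-extension use case above can be sketched in a few lines. The toy network fragment and the threshold value below are invented for illustration:

```python
# Hedged sketch of query extension over an associative network: the original
# query terms are expanded with their most strongly associated concepts, so
# retrieval no longer depends on the exact wording of the documents.

lwo = {
    "life style": {"nutrition": 0.8, "physical exercise": 0.7, "stress": 0.3},
    "MSD": {"musculoskeletal disorder": 0.9, "lower back pain": 0.6},
}

def extend_query(terms, network, threshold=0.5):
    """Return the query terms plus associated concepts above `threshold`."""
    extended = list(terms)
    for term in terms:
        for concept, strength in network.get(term, {}).items():
            if strength >= threshold and concept not in extended:
                extended.append(concept)
    return extended

print(extend_query(["life style"], lwo))
# ['life style', 'nutrition', 'physical exercise']
```

A retrieval engine would then match documents containing any of the extended terms, which is the concept-based (rather than word-based) retrieval described above.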

Since lightweight ontologies can be constructed automatically from text documents (see Sec.3) they can also play an important role in the first steps of building more detailed knowledge models. For example:

  • Building a simulation model might start with learning a lightweight ontology from relevant text documents, which gives an initial account of the relevant concepts to consider and how they are associated with each other.
  • Defining a mapping between the schemas of two different data sources might begin with learning a lightweight ontology from text documents as well as from already existing ontologies and thesauri.

Fig.4: Tag cloud derived from the lightweight ontology in Fig.3

2.2 Full-fledged ontologies

Ontologies with a richer structure, i.e. consisting of a taxonomy and additional relationships between concepts, are in the following called full-fledged ontologies. They can be used whenever a more detailed conceptual model of a domain of discourse is needed:

  • Software engineering: An ontology provides a formal representation of the relevant concepts in the domain of interest together with their attributes and inter-relationships. Due to the formal representational basis of description logics a computer can perform formal reasoning on the ontology and check it for consistency and compliance with business logic. Moreover, the ontology can be automatically translated into a component of the target software system. Often UML class diagrams are used in software engineering. Although UML class diagrams qualify as ontologies in an informal way they are not based on any representation formalism and therefore do not facilitate consistency checks or automatic translation.
  • Interoperability: The semantic interoperability of application systems requires either a common data schema or a mapping between the data schemas. In order to keep the actual data schemas hidden an ontology can serve as an interchange format that provides a neutral representation of the kinds of data objects involved, their attributes and inter-relationships. Each application system needs only to map to this interchange ontology in order to communicate with other application systems.
  • Information extraction from texts: Automatically extracting facts from text documents not only requires natural language understanding capabilities but also an ontology that provides the necessary background knowledge and the schemata into which the facts are extracted. For example, for extracting facts from life science documents the relationships between proteins and (areas on) genomes might be relevant and have to be encoded in the ontology.

3. Where do the ontologies come from?

Ontologies can be obtained in one of the following ways, or a combination of them:

  • manual building,
  • reuse of existing ontologies,
  • automatically learning ontologies from text documents,
  • extending an existing ontology by social tagging.

Full-fledged ontologies can only be built manually, possibly reusing parts of already existing ones. Automatically learning a full-fledged ontology from text documents is subject to ongoing research and not practically feasible at the moment.

As opposed to full-fledged ontologies, lightweight ontologies can be automatically learned from text documents. This opens up huge opportunities whenever:

  1. a lightweight ontology is sufficient for the application (as for most information retrieval scenarios), and/or
  2. complex models need to be built (such as full-fledged ontologies, schema mappings, simulation models): instead of starting from scratch, an initial lightweight ontology is learned to get hints as to what concepts to consider in the final models. This is very helpful because in the beginning it is often only partially known what the relevant domain concepts are.

4. Learning lightweight ontologies with ai-one

Many approaches exist for learning lightweight ontologies from text documents. A recent one is based on a biologically inspired neural network (BINN) and the associated learning algorithm provided by the company ai-one™. It has considerable advantages over other approaches (see Reimer et al. 2011 for details):

  • Higher relevance: The learned associations between concepts are more relevant (as judged by domain experts) than those of other approaches.
  • Directed associations: Most classical approaches yield symmetric associations between concepts, while target applications (e.g. query extension) often need asymmetric (or directed) associations. Learning a lightweight ontology with a BINN is one of the few approaches that results in directed associations.
  • Speed: Building association nets with a BINN is orders of magnitude faster than with other approaches.
  • Incremental learning: Due to the nature of a BINN, the learning of lightweight ontologies is incremental, i.e. it can be continued at any time as further input documents become available. This is not possible with most other approaches, which have to start from scratch when new learning input is to be considered.
  • Evolving domains: Due to the support of incremental learning it is possible to take account of evolving domains when using a BINN for learning.
  • Small learning input: Unlike other approaches, learning a lightweight ontology with a BINN already delivers reasonable associations from a very small number of input texts.
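ai-one's BINN learning algorithm is proprietary and not described here; the sketch below only illustrates the generic idea behind two of the properties listed above, incremental learning and directed associations, using a simple co-occurrence scheme (all terms and documents are invented):

```python
# Generic (non-BINN) sketch: association strengths are derived from
# co-occurrence counts that are updated document by document, so new texts
# can be folded in at any time without restarting the learning process.
from collections import defaultdict
from itertools import permutations

class IncrementalLWO:
    def __init__(self):
        self.term_count = defaultdict(int)  # term -> number of documents seen in
        self.cooc = defaultdict(int)        # (a, b) -> co-occurrence count

    def learn(self, document_terms):
        """Update counts from one document (given as a set of concept terms)."""
        terms = set(document_terms)
        for t in terms:
            self.term_count[t] += 1
        for a, b in permutations(terms, 2):
            self.cooc[(a, b)] += 1

    def strength(self, a, b):
        """Directed association a -> b, estimated as P(b | a) in [0, 1]."""
        if self.term_count[a] == 0:
            return 0.0
        return self.cooc[(a, b)] / self.term_count[a]

net = IncrementalLWO()
net.learn({"life style", "nutrition"})
net.learn({"life style", "nutrition", "physical exercise"})
net.learn({"nutrition"})  # further documents can arrive at any time

print(net.strength("life style", "nutrition"))   # 1.0
print(net.strength("nutrition", "life style"))   # 2/3: directed, asymmetric
```

Because the conditional probability is normalised by each source term separately, the resulting associations are naturally asymmetric, which is what applications like query extension require.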

U. Reimer, E. Maier, S. Streit, T. Diggelmann, M. Hoffleisch: Learning a Lightweight Ontology for Semantic Retrieval in Patient-Centered Information Systems. International Journal of Knowledge Management, Vol. 7, No. 3, 2011.