Archive for the ‘Uncategorized’ Category

ai-one’s Biologically Inspired Neural Network

Sunday, February 1st, 2015

ai-one’s Learning Algorithm: Biologically Inspired Neural Network
– Introduction to HSDS vs ANN in Text Applications

Unlike traditional neural nets, the neural networks underlying ai-one – HoloSemantic Data Space neural networks (invented by Manfred Hoffleisch), or “HSDS” for short – are massively connected, asymmetrical graphs which are stimulated by binary spikes. HSDS have no neural structures pre-defined by the user. Their building blocks resemble biological neural networks: a neuron has dendrites, on which the synapses from other neurons are placed, and an axon that ends in synapses at other neurons.

The connections between the neurons emerge in an unsupervised manner as the learning input is translated into the neural graph structure. The resulting graph can then be queried by stimulating specific neurons. Traditional neural systems require the appropriate network structure to be set up in advance, according to what is to be learned. Moreover, the supervised learning employed by neural nets such as the perceptron requires a teacher who answers specific questions. Even neural nets that employ unsupervised learning (like those of Hopfield and Kohonen) require a neighborhood function adapted to the learning problem. In contrast, HSDS require neither a teacher nor a predefined structure or neighborhood function (note that although a teacher is not required, in most applications programmatic teaching is used to ensure the HSDS has learned the content needed to meet performance requirements). In the following we characterize HSDS by their most prominent features.
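To make the contrast concrete, here is a minimal Python sketch of a network whose neurons and directed connections emerge from the input alone, with no predefined topology or neighborhood function. The update rule (and the weaker backward link) is our illustrative assumption, not ai-one's actual algorithm:

```python
from collections import defaultdict

class EmergentGraph:
    """Toy emergent network: neurons and directed connections are
    created on demand as input arrives, not laid out in advance.
    The update rule is an illustrative assumption."""

    def __init__(self):
        # weight[a][b] is stored independently of weight[b][a],
        # so connections can be asymmetric
        self.weight = defaultdict(lambda: defaultdict(float))

    def observe(self, tokens):
        # unseen tokens become new neurons implicitly; each ordered
        # co-occurrence strengthens a directed connection
        for i, a in enumerate(tokens):
            for b in tokens[i + 1:]:
                self.weight[a][b] += 1.0  # forward link
                self.weight[b][a] += 0.5  # weaker backward link (assumed)

g = EmergentGraph()
g.observe(["neurons", "grow", "connections", "unsupervised"])
print(g.weight["neurons"]["grow"], g.weight["grow"]["neurons"])  # 1.0 0.5
```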

Exploitation of context

In ai-one applications like BrainDocs, HSDS is used for learning associative networks and for feature extraction. The learning input consists of documents from the application domain, which are broken down into segments rather than entered whole: sentences may be submitted as is or segmented into sub-sentences according to grammatical markers. Through experimentation, we have found that a segment should ideally consist of 7 to 8 words, which is in line with findings from cognitive psychology. Breaking text documents down into sub-sentences is the closest practical approximation to this ideal segment size. The contexts given by the sub-sentence segments help the system learn. The transitivity of term co-occurrences across the various input contexts (i.e. segments) is a crucial contribution to creating appropriate associations. This can be compared with the higher-order co-occurrences explored in the context of latent semantic indexing.
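The following sketch illustrates both ideas: splitting text into sub-sentence segments of at most 8 words, and the transitivity of co-occurrence. The splitting rule (punctuation plus a few conjunctions) and the example sentence are invented for the demonstration; ai-one's actual segmentation uses its own grammatical markers:

```python
import re
from collections import defaultdict

def segment(text, max_words=8):
    # split on punctuation and common conjunctions as crude
    # grammatical markers, then cap segments at max_words
    parts = re.split(r"[,;.!?]|\b(?:and|but|which|that)\b", text.lower())
    segments = []
    for part in parts:
        words = part.split()
        for i in range(0, len(words), max_words):
            segments.append(words[i:i + max_words])
    return segments

# record which words share a segment with which
cooc = defaultdict(set)
for seg in segment("The engine builds the index, and the index serves queries."):
    for w in seg:
        cooc[w].update(x for x in seg if x != w)

# transitivity: "engine" and "queries" never share a segment, but both
# co-occur with "index", which yields a second-order association
print(cooc["engine"] & cooc["queries"])  # {'the', 'index'}
```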

Continuously evolving structure
The neural structure of an HSDS is dynamic and changes constantly as neural operations proceed. In the neural context, change means that new neurons are produced or destroyed and connections are reinforced or inhibited. Connections that are not used in processing input for some time gradually weaken. The same effect can be applied to querying, in which case connections that are rarely traversed to answer a query also weaken.
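A minimal sketch of this reinforcement-and-decay dynamic; the decay factor and pruning threshold are made-up values for illustration, not ai-one's actual parameters:

```python
weights = {("cat", "mouse"): 1.0, ("cat", "dog"): 1.0}

def reinforce(edge, amount=0.5):
    # a traversed connection is strengthened
    weights[edge] = weights.get(edge, 0.0) + amount

def decay(factor=0.9, prune_below=0.05):
    # unused connections fade; very weak ones are destroyed
    for edge in list(weights):
        weights[edge] *= factor
        if weights[edge] < prune_below:
            del weights[edge]

for step in range(30):
    decay()
    if step % 3 == 0:
        reinforce(("cat", "mouse"))  # this link keeps being used

print(weights)  # ("cat", "mouse") survives; ("cat", "dog") has been pruned
```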

Asymmetric connections
The connections between two neurons need not be equally strong in both directions, and a connection need not exist between every pair of neurons (cf. Hopfield's correlation matrix, which is symmetric and fully connected).

Spiking neurons
An HSDS is stimulated by spikes, i.e. binary signals that either fire or do not. Thresholds play no role in an HSDS. The stimulus directed at a neuron is coded by the sequence of spikes that arrive at the dendrite.
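For contrast, the first function below is the classic threshold unit used in conventional ANNs; the second reacts only to the order in which binary spikes arrive. The sequence-matching rule is a toy stand-in, since this post does not specify the HSDS spike code:

```python
def threshold_unit(inputs, weights, theta=1.0):
    # conventional artificial neuron: graded sum compared to a threshold
    return sum(x * w for x, w in zip(inputs, weights)) > theta

def sequence_unit(arrivals, pattern):
    # toy spiking unit: fires iff the spikes from the sources listed in
    # `pattern` arrive in that relative order -- the information is in
    # the sequence, not in a summed potential
    it = iter(arrivals)
    return all(src in it for src in pattern)

print(threshold_unit([1, 1, 0], [0.6, 0.6, 0.9]))    # True: 1.2 > 1.0
print(sequence_unit(["A", "C", "B"], ["A", "B"]))    # True: A arrives before B
print(sequence_unit(["B", "C", "A"], ["A", "B"]))    # False: order reversed
```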

Massive connectivity
Whenever a new input document is processed, new (groups of) neurons are created, which in turn stimulate the network by sending out a spike. Some of the neurons reached by the stimulus react and develop new connections, whereas others, which are less strongly connected, do not. The latter nevertheless contribute to the overall connectivity because they make it possible to reach neurons that could not otherwise be reached. Given the high degree of connectivity, a spike can pass through a neuron several times, since the neuron can be reached via several paths. The frequency and the chronological sequence in which this happens determine the information that is read from the net.
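The sketch below propagates one spike through a small invented graph and records how often, and at which hop, the spike reaches each neuron via different paths; that arrival profile is the kind of "frequency and chronological sequence" the paragraph above refers to:

```python
from collections import defaultdict, deque

edges = {                      # made-up topology for illustration
    "doc": ["cat", "mouse"],
    "cat": ["animal", "mouse"],
    "mouse": ["animal"],
    "animal": [],
}

arrivals = defaultdict(list)   # neuron -> hop times of each spike arrival
queue = deque([("doc", 0)])
while queue:
    node, hop = queue.popleft()
    arrivals[node].append(hop)
    if hop < 3:                # cap the propagation depth
        for nxt in edges.get(node, []):
            queue.append((nxt, hop + 1))

# "mouse" is reached twice and "animal" three times, via different paths
print(dict(arrivals))  # {'doc': [0], 'cat': [1], 'mouse': [1, 2], 'animal': [2, 2, 3]}
```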

General purpose
There is no need to define a topology before the learning process starts, because the neural structure of the HSDS develops on its own. As a result, a wide range of information can be retrieved by means of different stimulation patterns: for example, direct associations or association chains between words can be found, the words most strongly associated with a particular word can be identified, and so on.
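Here is a sketch of the two query types just mentioned, run against a toy association graph; the vocabulary and weights are invented, and real queries would stimulate the learned HSDS rather than walk a Python dictionary:

```python
weights = {                    # invented association strengths
    "bank":  {"money": 0.9, "river": 0.4},
    "money": {"loan": 0.8, "bank": 0.7},
    "river": {"water": 0.9},
    "loan":  {"interest": 0.6},
}

def strongest_associations(word, k=2):
    # direct associations: the k most strongly connected neighbours
    return sorted(weights.get(word, {}).items(), key=lambda kv: -kv[1])[:k]

def association_chain(start, goal, path=None):
    # depth-first search for a chain of associations between two words
    path = (path or []) + [start]
    if start == goal:
        return path
    for nxt in weights.get(start, {}):
        if nxt not in path:
            found = association_chain(nxt, goal, path)
            if found:
                return found
    return None

print(strongest_associations("bank"))         # [('money', 0.9), ('river', 0.4)]
print(association_chain("bank", "interest"))  # ['bank', 'money', 'loan', 'interest']
```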

“Economy Contracts, the Digital Universe Expands”

Tuesday, December 15th, 2009

“At nearly 500 billion gigabytes the Digital Universe, if converted to pages of text and assembled into books, would stretch to Pluto and back 10 times.  At the current growth rate, that stack of books is growing 20 times faster than the fastest rocket ever made.

The Digital Universe is also messy. Because the image bits account for so much of the total, more than 95% of the data in the Digital Universe is unstructured, meaning its intrinsic meaning cannot be easily divined by simple computer programs. … The Semantic Web project is promising to develop the tools to help us do that in the future.”

“The last time we saw a confluence of two such powerful trends – an explosion of new and potentially disruptive technologies and an economic crisis of this magnitude – was before the computer was invented. The challenge of a lifetime is also the chance of a lifetime.”

See the full report at Digital Universe – Multimedia presentation by EMC/IDC

http://www.emc.com/collateral/demos/microsites/idc-digital-universe/iview.htm

We are now on Twitter!

Tuesday, July 28th, 2009

For those of you who are on Twitter and would like to receive the latest updates on semantic system, our ai-one technology, and other relevant AI news, follow us at:

http://twitter.com/ai_one

http://twitter.com/semanticsystems

Walter Diggelmann’s Presentation at UC San Diego Posted on YouTube

Thursday, July 23rd, 2009

Walter Diggelmann’s presentation at UC San Diego on July 20, 2009 is now posted on YouTube! Please click here to access the presentation on ai-one and its applications in research and in various industries.

Walter Presents at UCSD


semantic system ag at the University of California, San Diego

Saturday, May 16th, 2009

UC San Diego invited semantic system ag to deliver a speech to UCSD students and professors. At the talk, semantic system introduced ai-one™. The two parties agreed on a future collaboration and look forward to helping semantic system establish a base in San Diego.

Success in San Diego

Saturday, May 16th, 2009

semantic system ag is extending its visit to San Diego after successfully presenting at the Red Herring North America Top 100 awards.

The demand for more intelligence in computing is huge. semantic system has been invited to present its technology at the annual conference of the Security Network organization in San Diego, a very well-known incubator organization and event where new technologies are presented to large corporations and to the government.