Archive for the ‘Research Related to Biological Intelligence’ Category

The State of A.I. and Switzerland

Tuesday, June 28th, 2016

As you know, artificial intelligence, or AI, has been a part of our world at ai-one since our founding in 2003. Don’t be confused by the latest buzzwords: deep learning, machine learning, artificial intelligence and biologically inspired intelligence (our Nathan) are all part of the field of A.I. It’s a hot subject now, but the languages, techniques and algorithms have been around for decades, often as part of an application (Google search and map directions, for example).

In order to become its own industry, A.I. needs a large number of companies, money and its own problems to solve. This is the real news of the last few years, and Max Wegner and friends have created the “State of Artificial Intelligence” infographic to tell the story.

Today A.I. is a significant and growing sector of the technology industry; billions of dollars are invested in new AI developments, and companies around the world are working on new AI applications as you read this. And while most AI companies have been in existence for less than a decade (the average age is around four or five years), the tech behind AI is evolving, and the role of AI in our lives in the coming years is all but certain to grow.

One of the surprises in the report is the ranking of Switzerland as the second-largest location in the world by amount of VC funding received and third by number of companies. With almost all of these companies less than 10 years old, ai-one was clearly early, arriving before the cloud and big data brought in the new era. Our biologically inspired intelligence is another differentiation from all but a few of these companies.

It is a new era, and with all the competition comes demand from the business community to make significant investments in A.I.-powered applications. We see the change in the character of the inbound leads from our website. In the past, those inquiries came from PhDs, engineers and startups; today they come almost exclusively from product managers at larger enterprises. This is the type of demand that will drive growth, and we’re excited to be in this space.

If you want to see what our AI can do for your enterprise, please connect.


ai-one’s Biologically Inspired Neural Network

Sunday, February 1st, 2015

ai-one’s Learning Algorithm: Biologically Inspired Neural Network
– Introduction to HSDS vs ANN in Text Applications

Unlike traditional neural nets, the neural networks on which ai-one is based, HoloSemantic Data Space neural networks (invented by Manfred Hoffleisch), or “HSDS” for short, are massively connected, asymmetrical graphs which are stimulated by binary spikes. HSDS have no neural structures pre-defined by the user. Their building blocks resemble biological neural networks: a neuron has dendrites, on which the synapses from other neurons are placed, and an axon which ends in synapses at other neurons.

The connections between the neurons emerge in an unsupervised manner while the learning input is translated into the neural graph structure. The resulting graph can be queried by means of specific stimulations of neurons. In traditional neural systems it is necessary to set up the appropriate network structure at the outset according to what is to be learned. Moreover, the supervised learning employed by neural nets such as the perceptron requires a teacher who answers specific questions. Even neural nets that employ unsupervised learning (like those of Hopfield and Kohonen) require a neighborhood function adapted to the learning task. In contrast, HSDS require neither a teacher nor a predefined structure or neighborhood function (note that although a teacher is not required, in most applications programmatic teaching is used to ensure the HSDS has learned the content needed to meet performance requirements). In the following we characterize HSDS according to their most prominent features.

Exploitation of context

In ai-one applications like BrainDocs, HSDS is used for learning associative networks and for feature extraction. The learning input consists of documents from the application domains, which are broken down into segments rather than entered whole: sentences may be submitted as is or segmented into sub-sentences according to grammatical markers. Through experimentation, we have discovered that a segment should ideally consist of 7 to 8 words, which is in line with findings from cognitive psychology. Breaking down text documents into sub-sentences is the closest possible approximation to this ideal segment size. The contexts given by the sub-sentence segments help the system learn. The transitivity of term co-occurrences across the various input contexts (i.e. segments) is a crucial contribution to creating appropriate associations. This can be compared with the higher-order co-occurrences explored in the context of latent semantic indexing.
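As a rough illustration of this segmentation and co-occurrence idea (a simplified sketch, not ai-one’s actual implementation; the function names and the punctuation-based splitting rule are our own illustrative choices), the following Python snippet splits text at grammatical markers, caps segments near the ideal 7–8 word size, and counts which terms share a context:

```python
import re
from collections import defaultdict
from itertools import combinations

def segment(text, max_len=8):
    """Split text into sub-sentences at grammatical markers
    (punctuation), then cap each segment near the ideal 7-8 words."""
    parts = re.split(r"[.,;:!?]", text)
    segments = []
    for part in parts:
        words = part.lower().split()
        for i in range(0, len(words), max_len):
            chunk = words[i:i + max_len]
            if chunk:
                segments.append(chunk)
    return segments

def cooccurrences(segments):
    """Count how often two terms share a segment; shared contexts
    become candidate associations."""
    counts = defaultdict(int)
    for seg in segments:
        for a, b in combinations(sorted(set(seg)), 2):
            counts[(a, b)] += 1
    return counts

segs = segment("Neural networks learn associations. "
               "Associations emerge from context, not from rules.")
pairs = cooccurrences(segs)
# "neural" and "context" never share a segment, yet both co-occur with
# "associations" -- the transitivity that links them indirectly.
```

Note how the word “associations” appears in two different segments: it is exactly this overlap across contexts that lets transitive associations form.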

Continuously evolving structure
The neural structure of an HSDS is dynamic and changes constantly in line with neural operations. In this context, change means that new neurons are produced or destroyed and connections are reinforced or inhibited. Connections that are not used in the processing of input for some time gradually weaken. The same effect can be applied to querying, which then weakens connections that are rarely traversed to answer a query.
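A toy model of this reinforce-and-decay behaviour might look like the following (the class name, the constants and the multiplicative decay rule are illustrative assumptions on our part, not the actual HSDS update rule):

```python
class AssociativeGraph:
    """Toy directed graph whose edge weights strengthen with use and
    decay when idle; very weak edges are destroyed, mimicking the
    removal of unused connections."""

    def __init__(self, decay=0.9, reinforce=1.0, prune_below=0.05):
        self.weights = {}          # (src, dst) -> connection strength
        self.decay = decay
        self.reinforce = reinforce
        self.prune_below = prune_below

    def use(self, src, dst):
        """Traversing an edge (during learning or querying) reinforces it."""
        self.weights[(src, dst)] = self.weights.get((src, dst), 0.0) + self.reinforce

    def tick(self):
        """One time step: every edge weakens; edges that fall below the
        pruning threshold disappear from the graph."""
        self.weights = {edge: w * self.decay
                        for edge, w in self.weights.items()
                        if w * self.decay >= self.prune_below}

g = AssociativeGraph()
g.use("bird", "song")
g.use("bird", "song")   # used twice: stronger connection
g.use("bird", "cage")   # used once: weaker connection
for _ in range(30):     # 30 idle time steps
    g.tick()
# the frequently used edge outlives the rarely used one
```

The design point is simply that frequency of use, not any global supervision, decides which structure survives.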

Asymmetric connections
The connections between the neurons need not be equally strong in both directions, and a connection need not exist between every pair of neurons (cf. Hopfield’s correlation matrix).

Spiking neurons
The HSDS is stimulated by spikes, i.e. binary signals which either fire or do not. Thresholds do not play a role in HSDS. The stimulus directed at a neuron is coded by the sequence of spikes that arrive at the dendrite.
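In other words, the code is temporal rather than threshold-based: two stimuli with the same number of spikes but a different ordering are different messages. A minimal sketch of this encoding idea (illustrative only):

```python
def spike_train(bits):
    """Encode a stimulus as a sequence of binary spikes:
    1 = fire, 0 = silent.  No thresholds are involved; the
    information lies in the order in which spikes arrive."""
    return tuple(int(b) for b in bits)

a = spike_train("1101")
b = spike_train("1011")
# same spike count, different sequence: two distinct stimuli
```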

Massive connectivity
Whenever a new input document is processed, new (groups of) neurons are created which in turn stimulate the network by sending out a spike. Some of the neurons reached by the stimulus react and develop new connections, whereas others, which are less strongly connected, do not. The latter nevertheless contribute to the overall connectivity because they make it possible to reach neurons which could not otherwise be reached. Given the high degree of connectivity, a spike can pass through a neuron several times since it can be reached via several paths. The frequency and the chronological sequence in which this happens determine the information that is read from the net.
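A simplified way to picture this is breadth-first spike propagation through a directed graph, recording how often and at which hop each neuron is reached (a purely illustrative sketch; the real HSDS readout is more involved, and the graph here is invented for the example):

```python
from collections import defaultdict, deque

def propagate(edges, source, max_hops=3):
    """Send a spike from `source` and record, per neuron, the hop times
    at which it arrives.  With dense connectivity a neuron can be
    reached via several paths, so it may fire repeatedly; the arrival
    count and timing are what a query reads out."""
    arrivals = defaultdict(list)   # neuron -> list of hop times
    frontier = deque([(source, 0)])
    while frontier:
        node, hop = frontier.popleft()
        if hop >= max_hops:
            continue
        for nxt in edges.get(node, []):
            arrivals[nxt].append(hop + 1)
            frontier.append((nxt, hop + 1))
    return dict(arrivals)

edges = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["a"]}
hits = propagate(edges, "a")
# "d" is reached twice at hop 2 -- once via "b" and once via "c"
```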

General purpose
There is no need to define a topology before starting the learning process because the neural structure of the HSDS develops on its own. This is why it is possible to retrieve a wide range of information by means of different stimulation patterns. For example, direct associations or association chains between words can be found, the words most strongly associated with a particular word can be identified, etc.
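For instance, given a toy associative network with weighted links, direct associations and association chains can be read out as follows (a sketch of the kinds of queries described above, not the HSDS stimulation mechanism itself; the example network and function names are invented for illustration):

```python
from collections import deque

def strongest(assoc, word):
    """Return the word most strongly associated with `word`."""
    links = assoc.get(word, {})
    return max(links, key=links.get) if links else None

def chain(assoc, start, end):
    """Find an association chain between two words: the shortest
    path through the associative graph, or None if no chain exists."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        w = queue.popleft()
        if w == end:
            path = []
            while w is not None:
                path.append(w)
                w = prev[w]
            return path[::-1]
        for nxt in assoc.get(w, {}):
            if nxt not in prev:
                prev[nxt] = w
                queue.append(nxt)
    return None

# toy associative network (numbers = association strength)
assoc = {
    "bird":    {"song": 3, "wing": 2},
    "song":    {"bird": 3, "grammar": 1},
    "grammar": {"song": 1, "syntax": 2},
    "syntax":  {"grammar": 2},
    "wing":    {"bird": 2},
}
```

Querying `strongest(assoc, "bird")` finds the direct association, while `chain(assoc, "bird", "syntax")` recovers an association chain through intermediate words.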

AI, AGI, ASI, Deep Learning, Intelligent Machines.. Should you worry?

Saturday, January 17th, 2015

If the real life Tony Stark and technology golden boy, Elon Musk, is worried that AI is an existential threat to humanity, are we doomed? Can mere mortals do anything about this when the issue is cloaked in dozens of buzzwords and the primary voices on the subject are evangelists with 180 IQs from Singularity University? Fortunately, you can get smart and challenge them without a degree in AI from MIT.

There are good books on the subject. I like James Barrat’s Our Final Invention; while alarmist, it is thorough and provides a guide to a number of resources from both sides of the argument. One of those is the Machine Intelligence Research Institute (MIRI), founded by Eliezer Yudkowsky. The book below was recommended on the MIRI website and is a good primer on the subject.

Smarter Than Us – The Rise of Machine Intelligence by Stuart Armstrong can also be downloaded at iTunes.

“It will sharpen your focus to see AI from a different view. The book does not provide a manual for Friendly AI, but it shows the problems and points to the three critical things needed. We are evaluating the best way for ai-one to participate in the years ahead.” Walt Diggelmann, CEO ai-one.

In Chapter 11 Armstrong recommends we take an active role in the future development and deployment of AI, AGI and ASI. The developments are coming; the challenge is to make sure AI plays a positive role for everyone. A short summary:

“That’s Where You Come In . . .

There are three things needed—three little things that will make an AI future bright and full of meaning and joy, rather than dark, dismal, and empty. They are research, funds, and awareness.

Research is the most obvious.
A tremendous amount of good research has been accomplished by a very small number of people over the course of the last few years—but so much more remains to be done. And every step we take toward safe AI highlights just how long the road will be and how much more we need to know, to analyze, to test, and to implement.

Moreover, it’s a race. Plans for safe AI must be developed before the first dangerous AI is created.
The software industry is worth many billions of dollars, and much effort (and government/defense money) is being devoted to new AI technologies. Plans to slow down this rate of development seem unrealistic. So we have to race toward the distant destination of safe AI and get there fast, outrunning the progress of the computer industry.

Funds are the magical ingredient that will make all of this needed research (in applied philosophy, ethics, and AI itself) and the implementation of all these results a reality. Consider donating to the Machine Intelligence Research Institute (MIRI), the Future of Humanity Institute (FHI), or the Center for the Study of Existential Risk (CSER). These organizations are focused on the right research problems. Additional researchers are ready for hire. Projects are sitting on the drawing board. All they lack is the necessary funding. How long can we afford to postpone these research efforts before time runs out?”

About Stuart: “After a misspent youth doing mathematical and medical research, Stuart Armstrong was blown away by the idea that people would actually pay him to work on the most important problems facing humanity. He hasn’t looked back since, and has been focusing mainly on existential risk, anthropic probability, AI, decision theory, moral uncertainty, and long-term space exploration. He also walks the dog a lot, and was recently involved in the coproduction of the strange intelligent agent that is a human baby.”

Since ai-one is a part of this industry and one of the many companies moving the field forward, there will be many more posts on the different issues confronting AI. We will try to keep you updated and hope you’ll join the conversation on Google+, Facebook, Twitter or LinkedIn. AI is already pervasive and developments toward AGI can be a force for tremendous good. Do we think you should worry? Yes, we think it’s better to lose some sleep now so we don’t lose more than that later.


(originally posted on

Songbirds use grammar rules

Thursday, August 11th, 2011

Researchers have found that songbirds have something that resembles grammar as we know it, and that the birds are very responsive to rule violations. Their tweets have a syntax: maybe not the same concepts as ours (nouns, pronouns, verbs, adjectives, adverbs and so on), but a syntactic structure nonetheless. Syntax is the study of principles and rules for constructing sentences, and grammar rules are a part of syntax.

Language is made up of signs, meanings and a code connecting signs with their meanings. Semiotics is the study of how signs and meanings are combined, used and interpreted (you can read more in our paper Semiotics and Intrinsic Semantics).

The research findings are published in Nature and NewScientist.

Browser extension Hyperwords

Wednesday, July 27th, 2011

The browser plugin Hyperwords is based on the research of Doug Engelbart and turns words and numbers into hyperlinks. Let’s first take a look at how to use Hyperwords.




Hyperwords not only lets you jump to another page or another website (like “normal” hyperlinks); it lets you interact with the selected word in several ways. After selecting text, a small blue ball and then a pop-up menu appear, offering reference, sharing, (currency) conversion and even translation options. These options can be customised and expanded to your taste.

Hyperwords allows us to set what we read in context and to associate the words (and numbers) just like in an associative network (see Prof. Dr. Ulrich Reimer’s explanation of Lightweight Ontologies, LWO).

You can download and install Hyperwords on Firefox, Chrome or Safari. Head over to