SDDT recognizes ai-one’s presentation at CommNexus to SK Telecom of South Korea

June 27th, 2013

ai-one was recognized for its participation in the CommNexus MarketLink event on June 4th in San Diego, California. The event featured companies from across the US selected by SK Telecom for their potential to add value to SK Telecom’s network. The meeting was also attended by SK’s venture group based in Silicon Valley.
Tierney Plumb of the San Diego Daily Transcript reported, “San Diego-based ai-one inc. pitched its offerings Tuesday to the mobile operator. The company, which has discovered a form of biologically inspired neural computing that processes language and learns the way the brain does, was looking for two investments — each about $3 million — from SK. One is a next-generation Deep Personalization Project whose goal is to create an intimate personal agent while providing the user with total privacy control.”
For the full text of this article, see the San Diego Source technology piece “Startups line up to meet with SK Telecom.”

Collaboration, Artificial Intelligence and Creativity

April 4th, 2013

We are thrilled to publish this guest blog by Dan Faggella – a writer with a focus on the future of consciousness and technology. ai-one met Dan online through his interest in the beneficial development of human and volitional (sentient) potential. Dan is a national martial arts champion in Brazilian Jiu Jitsu and a Master’s graduate of the prestigious Positive Psychology program at the University of Pennsylvania. His eclectic writings and interviews with philosophers and technology experts can be found online.

Artificial Intelligence as a Source for Collaboration

At a recent copywriting event in Las Vegas, I heard a nationally renowned writer of sales letters and magazine ads mention something that resonated with me. He said that copywriters are generally isolated people who like to work at home on a laptop, not in a big room with other people, or in a cubicle in an office – but that some of the absolute best ad agencies were getting their best results by “forcing” (in his words) their best copywriters to work together on important pitches and sales letters – delivering a better product than any of them could have alone.

Some people in the crowd seemed surprised, and the copywriter on stage mentioned that many “a-list” copywriters tend to think that their creativity and effectiveness will be stifled by pandering to the needs of other writers, or by arguing over methods and approaches to writing. In my opinion, however, this notion of the “genius of one” is on the way out, even in fields where creativity rules.

If we take the example of sports, the need for feedback and collaboration is for some reason more obvious. A professional football team does not have one genius coach; it has offensive, defensive, and head coaches with teams of assistant coaches. In addition, top athletes from basketball to wrestling to soccer are usually eager to play with and against a variety of teammates and opponents in order to broaden their skills and test their game in new ways. The textbooks on the development of expertise are full of examples from the world of sport, especially pertaining to feedback, coaching, and breaking from insularity.

The focus of my graduate studies at UPenn was in the domain of skill development, where the terms “feedback” (perspective and advice from experts outside oneself) and “insularity” (a limited scope of perspective based on an inability or unwillingness to seek out or take in the perspective of other experts) are common. In sport, insularity is clearly seen as negative. In literature or philosophy, however, the “genius of one” still seems to reign.

Why might this be the case, when in so many other fields (chess, sports, business, etc.) we see collaboration proliferate? I believe that the answer to this question lies partially in the individual nature of these fields, but that new approaches in collaboration – and particularly new applications of artificial intelligence – will eventually break down the insularity in these and many other “creative” fields.

What is Creativity & Collaboration All About, Anyway?

Creativity, in short, is the ability to create, or to bend rules and convention in order to achieve an end. Collaboration is working jointly on a project. Both, in my mind, imply the application of more intelligence to a particular problem.

Just as three top copywriters can put together a better sales letter (generally) than one copywriter, three top chess players are more likely to defeat a computer chess program (generally) than one top chess player alone.

Technology allows us to bring more to bear when it comes to applying intelligence. Even in the relatively simple task of putting together this article, I am able to delete, reorganize, link, and research thanks to my laptop and the internet. I bring more than my brain and a pen on paper could do alone. I may not be “collaborating,” but I am applying the information and research of others to my own work in real time.

Artificial intelligence adds an entirely new level of “applied intelligence” to projects that may extend beyond what internet research and human collaboration could ever achieve. For our purposes today, the progression from “less” to “more” applied intelligence will be: working alone, working with others, working with others and researching online, and applying artificial intelligence. We already have tremendous evidence of this today in a vast number of fields.

Applications Already Underway

I will argue that, in general, collaboration and the application of artificial intelligence will be prevalent in a field based primarily on: the competitiveness of that field (in sports and business, for instance, competition is constant, so testing and evaluating can be constant), the popularity or perceived importance of the field (trivial matters rarely hold the attention of groups of smart people, and are even less likely to garner grants or resources), and the lucrativeness of that field (such as finance).

In finance, for example, the highly competitive, highly lucrative, high-speed work of number-crunching and pattern recognition has been one of the most prominent domains of AI application. Not only are human decisions bolstered by amazingly complex real-time data, but many “decisions” are no longer made by humans at all; they are completely or mostly automated based on streaming data and making sense of patterns. It is estimated that nearly 50% of all trades in American and European markets are made automatically – and that share is likely to increase.

Anyone who has visited Google or Facebook knows that advertisements and promoted products are calibrated specifically to each user. This is not done by a team of guessing humans individually testing ads and success rates, but by intelligent, learning algorithms that use massive amounts of data from massive numbers of users (including data from beyond their own sites) to present the advertisements or products most likely to generate sales.

The above seem like obvious first applications of the expensive technologies of AI because of the amount of money involved and the necessity for businesses to stay ahead in a competitive marketplace (generating maximum revenue, giving customers offers that they want, etc.). Implications have already been seen in sports, with companies like Automated Insights providing intelligent sports data and statistics in regular, human language in real time. My guess is that in the big-money world of professional sport, even this kind of advanced reporting is only the very tip of the iceberg.

However, the implications will soon also reverberate into the worlds of more “complex” systems of meaning, as well as fields where the economic ramifications are less certain. I believe that the humanities (poetry, literature, philosophy) will see a massive surge of applied intelligence that will not only break the mold of the “genius of one,” but will also open doors to all of the future possibilities of AIs contributing to “creative” endeavors.

Future Implications of AI in “Creative” Fields / The Humanities

It seems perfectly reasonable that more applications for AI have been found in the domain of finance than in the domain of philosophy or literature. Finance involves numbers and patterns, while literature involves more complex and arbitrary ideas of “meaning” and a system of much more complicated symbols.

However, I must say that I am altogether surprised that there seems to be very little application of AI to the domain of the humanities. In part, I believe this to be a problem of applying AI to complex matters of “meaning” and subjective standards of writing quality (there is no clear “bottom line” as there is in finance), but the notion of the “genius of one” invariably plays a part in this trend as well, as even collaboration among humans (never mind collaboration with an AI) is often comparatively limited in these fields.

Not being a novelist, I can hardly say that if writers collaborated with other expert writers more often, they would create “better” overall works. I have an inkling, however, that this might be the case.

In the world of psychology, I believe that outside the desire to “hog the glory,” expert researchers would almost certainly take on the opportunity to collaborate on their most important projects with other expert researchers in the field. In the world of flowing data streams, applying AI and statistical models might also seem more applicable.

In philosophy – where works are generally still seen to be completed by lone, pensive thinkers in dark rooms – I believe that collaboration and AI will eventually transcend the “genius of one,” and rid us of the notion that the best work is done by solo minds.

If one philosopher spent 12 months aiming to compare and find connections between the ethics of Aristotle and Epictetus, I would argue that 12 very smart philosophers working together for 12 months might achieve much more insight.

Similarly, if intelligent algorithms could be created to detect commonalities in terms, symbols, and meanings, entirely new connections and insights might be made possible, and far vaster reams of philosophical text could be analyzed in a much more uniform fashion – producing an objective perspective completely unattainable to human beings without an AI aid. I believe that this is already possible, though its applications in philosophy and the humanities in general seem almost nonexistent outside of a few events and experiments.

I believe very much in the power of the individual mind, and mean no disrespect to human capacity or to individual thinkers when I say that the era of the “genius of one” is going to progressively evaporate. In 1920, you might have been able to win the Nobel Prize in your 40s with a small team of researchers. In 2020, you are more likely to win it in your 60s with a global research team that has been hard at work for decades. Even the more “creative” domains of the humanities will experience a similar shift as collaboration becomes more common, research becomes simpler, and intelligence becomes ever more prevalent and nuanced.

Conclusion: Robot Shakespeare?

It is interesting to posit that at some point, potentially within this century, the best prose, the best novels, and the best philosophical insight will come not from individual geniuses, nor even from teams of researchers, but almost entirely from AI.

This is not to say that I believe a “robot Shakespeare” will be in our midst anytime soon – but rather that we ought to keep our minds open to the idea of AI being something other than calculators and cars that drive themselves. The nuanced connections of meaning can already be used to supplement human efforts with insights in many domains, and over a period of 20, 40, or 60 years, we may see all elements of human capacity (not just statistical number-crunching) enhanced a billion-fold by the AIs of the future.

The ethical, political, and other implications aside, let us keep our eyes open for the implications of applied intelligence across all fields of human endeavor. We may question technology’s ability to contribute, but remember that it was less than 70 years between the early flights of the Wright brothers and landing on the moon. Might we see a similar time frame between the advent of Amazon’s intelligent product offers and the replacement of humans at the helm of creative endeavor in writing, philosophy, poetry, and beyond? Only time will tell.

Thinking forward,

-Daniel Faggella

Our First Annual Keynote Conference: Nathan’s Birthday Party

February 5th, 2013

ai-one will announce the birth of a completely new software technology, Nathan, at our first keynote conference on February 27, 2013 at 1830 in Olten, Switzerland. This is a public event that is open to the media, investors, entrepreneurs and anyone interested in our machine learning technology.

Our founder and CEO, Mr. Walt Diggelmann, will be the keynote speaker. He will introduce Nathan to the world for the first time. Nathan is a revolutionary machine learning technology that has the potential to disrupt the way we use and develop software. Unlike other forms of machine learning, Nathan works like the human brain and can be used anywhere on almost any device. Nathan is the next generation of ai-one’s biologically inspired intelligence and is the culmination of more than 10 years of research and development. In addition, our COO Mr. Tom Marsh will update the audience on how ai-one will bring Nathan to market and our progress developing prototypes to prove the value of Nathan.

The keynote will be held at Weltbild Verlag GmbH, Industriestrasse 78, 4609 Olten, Switzerland. Tickets and reservations are available online.

Posted by: Olin Hyde

Big Data Solutions: Intelligent Agents Find Meaning of Text

January 18th, 2013


What if your computer could find ideas in documents? Building on the idea of fingerprinting documents, ai-one helped develop ai-BrainDocs – a tool that mines large sets of documents to find ideas using intelligent agents. This solves a big problem for knowledge workers: how to find ideas in documents that are missed by traditional keyword search tools (such as Google, Lucene, Solr, FAST, etc.).

Customers Struggle with Unstructured Text

Almost every organization struggles to find value in “big data” – especially ideas buried within unstructured text. Often a very limited set of vocabulary can be used to express very different ideas. Lawyers are particularly talented at this: They can use 100 unique words to express thousands of ideas by simply changing the ordering and frequencies of the words.

Lawyers are not the only ones who need to find ideas inside documents. Other use cases include finding and classifying complaints, identifying concepts within social media feeds such as Twitter or Facebook, and mining PubMed to find related research articles. Recently, several healthcare companies have contacted us about mining electronic health record (EHR) data to find information buried within doctors’ notes so they can predict adverse reactions, find co-morbidity risks and detect fraud.

The common denominator for all these use cases is simple: how to find “what matters most” in documents? They need a way to find these ideas fast enough to keep pace with the growth in documents. Given that information is growing at almost 20% per year, a very big problem now will be enormous next year.
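That growth rate compounds quickly. A back-of-the-envelope sketch (the starting collection size is an arbitrary assumption for illustration):

```python
# Back-of-the-envelope sketch of ~20% annual growth in stored documents.
# The starting collection size is an arbitrary assumption.
docs = 1_000_000
growth_rate = 0.20

for year in range(1, 5):
    docs *= 1 + growth_rate
    print(f"year {year}: {docs:,.0f} documents")

# 1.2 ** 4 is about 2.07, so the collection roughly doubles in four years.
```

In other words, a search approach that barely copes today will be handling twice the volume within about four years.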

Problems with Current Approaches

We’ve heard numerous stories from customers who were frustrated at the cost, complexity and expertise required to implement solutions that enable machines to read and understand the meaning of free-form text. Often these solutions use latent semantic indexing (LSI) and latent Dirichlet allocation (LDA). In one case, a customer spent more than two years trying to combine LSI with a Microsoft FAST enterprise search appliance running on SharePoint. It failed because they were searching a high volume of legal documents with very low variability. They were searching legal contracts to find paragraphs that included a very specific legal concept that could be expressed with many different combinations of words. Keyword search failed because the legal concepts were expressed in commonly used words. LSI and LDA failed because the systems required a very large training set – often involving hundreds of documents. Even after reducing the specificity requirements, LSI and LDA still failed because they could not find the legal ideas at the paragraph level.


We found inspiration in the complaints we heard from customers: What if we could build an “intelligent agent” that could read documents like a person? We thought of the agent as an entry-level staff person who could be taught with a few examples then highlight paragraphs that were similar to (but not exactly like) the teaching examples.

Solution: Building Intelligent Agents

For several months, we have been developing prototypes of intelligent agents that mine unstructured text to find meaning. We built a Java application that combines ai-one’s machine learning API with natural language processing (OpenNLP) and a NoSQL database (MongoDB). Our approach generates an “ai-Fingerprint” – a representational model of a document built from keywords and association words. The “ai-Fingerprint” is similar to a graph G[V,E], where G is the knowledge representation, V (vertices) are keywords, and E (edges) are associations. This can also be thought of as a topic model.
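To make the G[V,E] idea concrete, here is a minimal sketch of what such a fingerprint could look like as a JSON-serializable structure. The field names, the example keywords, and the shape of the object are all assumptions for illustration – the post does not show ai-one’s actual output format:

```python
import json

# Hypothetical "ai-Fingerprint" as a graph G[V, E]: vertices are keywords,
# edges link each keyword to its association words. The field names and
# contents here are illustrative assumptions, not ai-one's real schema.
fingerprint = {
    "document_id": "contract-042",
    "keywords": ["consent", "assignment", "notice"],   # V: vertices
    "associations": {                                  # E: edges
        "consent": ["written", "prior"],
        "assignment": ["rights", "transfer"],
        "notice": ["days", "written"],
    },
}

# Serialized to JSON, the fingerprint can be stored directly as a
# document in MongoDB and queried later.
serialized = json.dumps(fingerprint)
restored = json.loads(serialized)
print(restored["associations"]["consent"])
```

Because the structure is plain JSON, each document’s fingerprint maps naturally onto one MongoDB document.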

The ai-Fingerprint can be generated for almost any size of text – from sentences to entire libraries of documents. As you might expect, the “intelligence” (or richness) of the ai-Fingerprint is proportional to the size of the text it represents. Very sparse text (such as a tweet) carries very little meaning. Large texts, such as legal documents, are very rich. This approach to topic modelling is precise – even without training or using external ontologies.

[NOTE: We are experimenting with using ontologies (such as OWL and RDF) as a way to enrich ai-Fingerprints with more intelligence. We are eager to find customers who want to build prototypes using this approach.]

The Secret Sauce

The magic is that ai-one’s API automatically detects keywords and associations – so it learns faster, with fewer documents and provides a more precise solution than mainstream machine learning methods using latent semantic analysis. Moreover, using ai-one’s approach makes it relatively easy for almost any developer to build intelligent agents.

How to Build Intelligent Agents?

To build an intelligent agent, we first had to consider how a human reads and understands a document.

The Human Perspective

Humans are very good at detecting ideas – regardless of the words used to express them. As mentioned above, lawyers can express dozens of completely different legal concepts with a vocabulary of just a few hundred words. Humans can recognize the subtle differences between two paragraphs by how a lawyer uses words – both in meaning (semantics) and structure (syntax). Part of the cleverness of a lawyer is finding ways to combine as few words as possible to express a very precise idea to accomplish a specific legal or business objective. In legal documents, each new idea is almost always expressed in a paragraph. So two paragraphs might have the exact same words but express completely different ideas.

To find these ideas, a person (or computer) must detect the patterns of word use – similar to finding a pattern in a signal. For example, as a child I knew I was in trouble when my mother called me by my first and last name – the combination of these words created a “signal” that was different than when she just used my first name. Similarly, a legal concept has a different meaning if two words occur together, such as “written consent,” than if it only uses the word “consent.”
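The “written consent” example can be sketched in a few lines: a simple adjacency check distinguishes a paragraph containing the phrase from one that merely contains both words. This is only a toy illustration of the signal idea, not ai-one’s method; the sample sentences are invented:

```python
# Toy illustration: the phrase "written consent" carries a different signal
# than the word "consent" alone. A plain adjacency check separates the two
# sample paragraphs even though both contain "consent" and "written".
def contains_phrase(text, phrase):
    """Return True if the words of `phrase` occur adjacently in `text`."""
    words = text.lower().split()
    target = phrase.lower().split()
    n = len(target)
    return any(words[i:i + n] == target for i in range(len(words) - n + 1))

a = "Assignment requires the written consent of both parties"
b = "Either party may consent to a written amendment"

print(contains_phrase(a, "written consent"))  # True: words are adjacent
print(contains_phrase(b, "written consent"))  # False: words are separated
```

Real concept detection is far subtler than adjacency, but the core intuition is the same: meaning lives in combinations of words, not in the words alone.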

The (Conventional) Machine Learning Perspective

It’s almost impossible to program a computer to find such “faint signals” within a large number of documents. To do so would require a computer to be programmed to find all possible combinations of words for a given idea to search and match.

Machine learning technologies enable computers to identify features within the data to detect patterns. The computer “learns” by recognizing the combinations of features as patterns.

[There are many forms of machine learning – so I will keep focused only on those related to our text analytics problem.]

Natural Language Processing

One of the most important forms of machine learning for text analytics is natural language processing (NLP). NLP tools are very good at codifying the rules of language for computers to detect linguistic features – such as parts of speech, named entities, etc.

However (at the time of this writing), most NLP systems can’t detect patterns unless they are explicitly programmed or trained to do so. Linguistic patterns are very domain specific. The language used in medicine is different than what is used in law, etc. Thus, NLP is not easily generalized. NLP only works in specific situations where there is predictable syntax, semantics and context. IBM Watson can play Jeopardy! but has had tremendous problems finding commercial applications in marketing or medical records processing. Very few organizations have the budget or expertise to train NLP systems. They are left to either buy an off-the-shelf solution (such as StoredIQ) or hire a team of PhDs to modify one of the open-source NLP tools. Good luck.

Latent Analysis Techniques

Tools such as latent semantic analysis (LSA), latent semantic indexing (LSI) and latent Dirichlet allocation (LDA) are all capable of detecting patterns within language. However, they require tremendous expertise to implement and often require large numbers of training documents. LSA and LSI are computationally expensive because they must recalculate the relationships between features each time they are given something new to learn. Thus, learning the meaning of the 1,001st document requires a calculation across the 1,000 previously learned documents. LSA uses a statistical approach called singular value decomposition to isolate keywords. Unlike LSA, ai-one’s technology also detects the association words that give a keyword context.
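The core step of LSA is a singular value decomposition of a term-document matrix. A toy sketch (the terms, counts, and rank are invented for illustration; real systems use TF-IDF weighting over far larger matrices):

```python
import numpy as np

# Toy LSA core step: SVD of a small term-document matrix.
# Rows are terms, columns are documents; values are raw counts.
terms = ["consent", "written", "transfer", "rights"]
A = np.array([
    [2, 1, 0],   # "consent" counts in docs 1..3
    [1, 1, 0],   # "written"
    [0, 0, 2],   # "transfer"
    [0, 1, 1],   # "rights"
], dtype=float)

U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the top k singular values: a rank-k "semantic" approximation
# of A that smooths over exact word choice.
k = 2
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
print("rank-2 reconstruction error:", np.linalg.norm(A - A_k))
```

The cost the post describes falls out of this picture directly: adding document 1,001 changes the matrix, so the whole decomposition must be recomputed over everything previously learned.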

Similar to our ai-Fingerprint approach, LDA uses a graphical model for topic discovery. However, it takes tremendous skill to develop applications using LDA. Even when implemented, it requires the user to make informed guesses about the nature of the text. Unlike LDA, ai-one’s technology can be learned in a few hours. It requires no supervision or human interaction. It simply detects the inherent semantic value of text – regardless of language.

Our First Intelligent Agent Prototype: ai-BrainDocs

It took our team about a month to build the initial version of ai-BrainDocs. Our team used ai-one’s keyword and association commands to generate a graph for each document. This graph goes into MongoDB as a JSON object that represents the knowledge (content) of each document.
Next we created an easy way to build intelligent agents. We simply provide the API with examples of the concepts we want to find. This training set can be very short: for one type of legal contract, it took only four examples of text for the intelligent agent to achieve 90% accuracy in finding similar concepts.
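The teach-with-a-few-examples idea can be sketched without the proprietary similarity measure. Here plain Jaccard overlap of word sets stands in for ai-one’s scoring, and all the example texts and the threshold value are invented for illustration:

```python
# Hedged sketch of a "taught" agent: score each paragraph by its best
# word-set overlap (Jaccard) with a handful of teaching examples, then
# flag paragraphs above a user-set threshold. ai-one's actual similarity
# measure is proprietary; Jaccard is only a stand-in.
def words(text):
    return set(text.lower().split())

def agent_score(paragraph, examples):
    """Best Jaccard similarity between the paragraph and any example."""
    p = words(paragraph)
    return max(len(p & words(e)) / len(p | words(e)) for e in examples)

examples = [
    "assignment requires prior written consent",
    "no transfer of rights without written consent",
]

paragraphs = [
    "rights may not be assigned without prior written consent",
    "payment is due within thirty days of the invoice date",
]

THRESHOLD = 0.3   # user-tunable sensitivity, as in the ai-BrainDocs UI
for p in paragraphs:
    score = agent_score(p, examples)
    print(f"{score:.2f}  {'MATCH' if score >= THRESHOLD else 'skip'}  {p}")
```

Because scoring happens paragraph by paragraph, the same mechanism naturally surfaces ideas at the paragraph level rather than ranking whole documents.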

Unlike solutions that use LSI, LDA and other technologies, the intelligent agents in ai-BrainDocs find ideas at the paragraph level. This is a huge advantage when looking at large documents – such as medical research or SEC filings.

Next we built an interface that allows the end-user to control the intelligent agents by setting thresholds for sensitivity and determining how many paragraphs to scan at a time.

Our first customers are now testing ai-BrainDocs – and so far they love it. We expect to learn a lot as more people use the tool for different purposes. We are looking forward to developing ways for intelligent agents to interact – just like people – by comparing what they find within documents. We are finding that it is best for each agent to specialize in a specific subject. So finding ways for agents to compare their results using Boolean operators enables them to find similarities and differences between documents.
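Combining specialist agents with Boolean operators reduces, in the simplest view, to set algebra over the paragraphs each agent flags. A minimal sketch (the agent names and paragraph IDs are invented):

```python
# Sketch of comparing specialist agents with Boolean operators: each agent
# contributes the set of paragraph IDs it matched, and set algebra finds
# the similarities and differences. IDs and agent names are illustrative.
consent_agent = {"p1", "p4", "p7"}       # flagged by a "consent" agent
termination_agent = {"p4", "p7", "p9"}   # flagged by a "termination" agent

both = consent_agent & termination_agent           # AND: ideas co-occur
either = consent_agent | termination_agent         # OR: union of findings
consent_only = consent_agent - termination_agent   # NOT: differences

print(sorted(both), sorted(either), sorted(consent_only))
```

Keeping each agent narrow and composing results afterward is what lets the agents specialize in one subject apiece, as described above.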

One thing is clear: Intelligent agents are ideal for mining unstructured text to find small ideas hidden in big data.

We look forward to reporting more on our work with ai-BrainDocs soon.

Posted by: Olin Hyde

Building Intelligent Agents: Google Now versus Apple SIRI?

December 14th, 2012

It has been a long time since our last blog post. Why? We’ve been busy learning how to build better intelligent agents.

Today, Kurt and I were discussing ways to improve feature detection algorithms for use in a prototype application called ai-BrainDocs. This is a system that detects concepts within legal documents. This is a hard problem because different legal concepts (or ideas) use the same words – there are no distinguishing features in the text itself.

ai-one’s technology is able to solve this problem by understanding how the same word (keyword) can mean different things by its context (as defined by association words). Together, keywords and associations create an array that we call an ai-Fingerprint. This can be thought of as a graph that can be represented as G[V,E]. ai-Fingerprints are easy to build using our Topic-Mapper API.

We pondered how the intelligent agents for Android developed by Google (called Google Now) and for Apple iOS (called SIRI) might perform on a simple test. We picked a use case where the words were sparse but unique: looking up the status of a departing flight on American Airlines. Both Google Now and Apple SIRI have tremendous advantages over ai-one because they: 1) have a lot more money to spend on R&D, 2) use expensive voice recognition technologies, and 3) store all queries made by every user, so they can apply statistical machine learning to refine results from natural language processing (NLP).

Unlike Apple and Google, ai-one’s approach is not statistical. We use a new form of artificial neural network (ANN) that detects features and relationships without any training or human intervention. This enables us to do something that Google and Apple can’t: autonomic learning. This is a huge advantage for situations where you need to develop machine learning applications to find information where you can’t define what you are seeking. This is common in so-called “Big Data” problems. It is also much cheaper, faster and more accurate than the statistical machine learning tools that Apple and Google are pushing.


Posted by: Olin Hyde

Lead Analyst Firm Names ai-one “Who’s Who in Text Analytics”

September 19th, 2012

ai-one evaluated as a machine learning for text vendor

We are proud to report that Gartner cites ai-one in its September 14 report, Who’s Who in Text Analytics. Analysts Daniel Yuen and Hanns Koehler-Kruener based this report on a survey of 55 vendors conducted in April 2012. Vendors were included based on offering distinct text analytics products, not those whose text analytics technology is part of another product. ai-one offers a general-purpose, autonomic machine learning tool that can be embedded within other applications. Earlier this year, Gartner named ai-one one of its “Cool Vendors 2012”* for content analytics. We believe the coverage of ai-one as a text analytics provider indicates the importance that Gartner places on the ability to evaluate information that cannot be processed using traditional tools that depend on tables, rows and models.

“Language is not math.”

ai-one uses a completely new form of machine learning to detect the meaning of text. The technology evaluates any length of text to isolate keywords and associations. The keywords are the most important words – the words that are central to the meaning of the document. The association words are the words that give the keywords context.

“Making sense of short text.”

Text analytics is particularly difficult for short texts – such as social media feeds from Facebook and Twitter. Humans are great at seeing the meaning in a few words. Computers are not.

ai-one’s context detection technology provides an easy solution to this problem. For example, our technology can learn the meaning of a very short text, such as a tweet: “Will Google eat Apple with the new J2ObjC?” It immediately detects the keywords ‘Google,’ ‘Apple’ and ‘J2ObjC’ and the associations ‘eat’ and ‘new.’ The system learns the meaning of these words by adding additional association words to the keywords as it is fed additional tweets. The more tweets, the more it learns. No human intervention or training sets are required – although the system learns faster if it is taught. In many ways, ai-one’s technology learns just like a human. It detects context by evaluating the associations of words. Most impressive, it forms concepts by connecting together groups of associations.
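The learning behavior described above can be sketched as association words accumulating around keywords, tweet by tweet. This is only an illustration of the behavior, not ai-one’s algorithm: here the keyword list is supplied by hand, whereas ai-one detects keywords itself, and the second tweet is invented:

```python
from collections import defaultdict

# Hedged sketch: with each new tweet, association words accumulate around
# previously seen keywords. The keyword set is given by hand here; ai-one
# detects keywords automatically.
keywords = {"google", "apple", "j2objc"}
associations = defaultdict(set)

def learn(tweet):
    tokens = {w.strip("?.,!'\"").lower() for w in tweet.split()}
    for kw in tokens & keywords:
        associations[kw] |= tokens - keywords - {kw}

learn("Will Google eat Apple with the new J2ObjC?")
learn("Google says J2ObjC translates Java to Objective-C")

# Each additional tweet enriches the context around each keyword.
print(sorted(associations["j2objc"]))
```

After two tweets, ‘J2ObjC’ already carries context from both: the more tweets, the richer each keyword’s association set becomes.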

 “ai-one thinks different.”

This approach is radically different from the rules-based approach used by IBM and the Bayesian statistical approaches of SAS and Autonomy. ai-one is purely a pattern recognition tool for multiple higher-order concepts. It finds the inherent meaning in any text by simply seeing how words connect with each other. Unlike AlchemyAPI, Textifier and other competitors that use ontologies connected to natural language processing (NLP), our technology works equally well in any language.

Prelude to the debut of NathanApp

ai-one’s Topic-Mapper SDK and API will soon be replaced by a cloud-deployable API called NathanApp. NathanApp and NathanNode are REST services through which we offer a complete analytics solution as a service. NathanCore is the native technology on which customers build their own interfaces using REST or any other standard. ai-one also plans to offer an open source infrastructure for NathanCore and NathanApp/Node, where REST, JSON, and other functions and services are offered as open source code. Details of NathanApp will be released in a future press release… but it is safe to say that ai-one’s research and development team has spent almost two years developing new technology that will enable ai-one technology to be used by anyone, anywhere, on any device.

We are very proud that Gartner has acknowledged ai-one as a Who’s Who and Cool Vendor. Moreover, we look forward to showing you very soon how NathanApp will change everything: Nathan will be the first intelligent agent that any developer can embed in any application. This is what ai-one considers a “smarter planet.”

*Gartner, Inc., Cool Vendors in Content Analytics, Rita L. Sallam et al., April 26, 2012. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Gartner names ai-one in the “Who’s Who in Text Analytics”

ai-one is listed as a leading company for machine learning on text

We are proud to announce that Gartner has listed ai-one as a leading technology company for text analysis in its latest research report, the September 14 Who’s Who in Text Analytics. The analysts Daniel Yuen and Hanns Koehler-Kruener examined and compared a total of 28 vendors, including the industry “heavyweights” IBM, SAS, SAP and Autonomy. ai-one is the only vendor listed with an independent, general-purpose application for cross-task solutions. In the spring, Gartner had already named ai-one a “Cool Vendor 2012” for content analytics. Gartner regards content analytics as one of the most important future tasks within business intelligence applications, because such technology can analyze both structured and unstructured content and recognize its meaning. The way ai-one is presented as a leader in text analytics shows how highly Gartner rates this topic. Gartner also clearly supports the new approaches in text analytics because it wants to underline the importance of intelligent tools that go beyond the use of tables and models.

“Language is not mathematics.”

Unlike the other companies listed in the report, ai-one takes a new approach to making machine learning more intelligent and precise. ai-one can analyze text of any length and spontaneously recognizes its meaning and keywords. These keywords are the most important words that, in combination, determine the meaning of a text. ai-one also detects the association words that give the keywords their context.

“ai-one recognizes meaning even in short texts.”

Text analysis and meaning recognition are particularly difficult for short texts. Feeds, tweets and Facebook posts often contain only short sentences, which nevertheless carry meaning in the aggregate. Apart from ai-one, every other vendor in Gartner’s report relies on language-dependent rule systems and world models.

ai-one can analyze even a very short text, such as the tweet “Will Google eat Apple with the new J2ObjC?” It automatically recognizes ‘Google,’ ‘Apple’ and ‘J2ObjC’ as keywords, along with the associations ‘eat’ and ‘new.’ The ai-one technology spontaneously learns the meaning of words from their context in other tweets. The more tweets there are on a topic, the more precisely ai-one understands their meaning and relationships: the more tweets, the smarter ai-one gets. No manual intervention is required; ai-one learns faster and better the more content is available. You could say that ai-one’s technology learns like a human being: it derives meaning from the contexts in which individual words are used. Beyond that, ai-one can recognize nested concepts from connected associations.
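ai-one has not published its learning algorithm, so the following Python sketch is only a naive illustration of the keyword/association idea in the tweet example above (“Will Google eat Apple with the new J2ObjC?”): keywords emerge as words that recur across tweets, and associations are the one-off words that co-occur with them. Every function, stopword list and threshold here is our own invented example, not ai-one’s implementation.

```python
from collections import Counter, defaultdict

# Hypothetical illustration only: a crude co-occurrence model, not ai-one's algorithm.
STOPWORDS = {"will", "the", "with", "a", "to", "what"}

def analyze(tweets):
    # Naive tokenization: lowercase and strip surrounding punctuation.
    token_lists = [
        [w.strip("?.,!'\"").lower() for w in t.split()] for t in tweets
    ]
    counts = Counter(
        w for toks in token_lists for w in toks if w and w not in STOPWORDS
    )
    # "Keywords": content words that recur across tweets (threshold is arbitrary).
    keywords = {w for w, c in counts.items() if c >= 2}
    # "Associations": words that co-occur with a keyword in the same tweet.
    assoc = defaultdict(set)
    for toks in token_lists:
        for kw in (w for w in toks if w in keywords):
            assoc[kw].update(
                w for w in toks if w and w not in STOPWORDS and w != kw
            )
    return keywords, assoc

tweets = [
    "Will Google eat Apple with the new J2ObjC?",
    "Google releases J2ObjC to translate Java code",
    "Apple fans wonder what Google wants with J2ObjC",
]
keywords, assoc = analyze(tweets)
# 'Google', 'Apple' and 'J2ObjC' recur, so they surface as keywords;
# one-off words like 'eat' and 'new' remain associations of 'google'.
```

Even this toy model shows why more tweets sharpen the picture: words that recur across messages separate from incidental ones. The real system is described as needing no such hand-tuned stopwords or thresholds.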

ai-one thinks differently

ai-one takes a radically different approach from the model- and rule-based systems used by IBM, SAS or SAP. Bayesian and statistical approaches can recognize patterns, but they always require models and static rule sets. ai-one’s Nathan finds the inherent relationships and meanings in text because it recognizes semantic connections and associative meanings. Ontologies, thesauri and NLP serve ai-one as supplements to refine interpretations, above all when the text itself is of poor quality. The ai-one core is completely language-independent.

A Preview of the NathanApp Debut

The Gartner report was evaluated in June 2012 and is therefore already nearly out of date: the Topic-Mapper SDK/API it examined has since been replaced by NathanCore. ai-one will shortly release the next generation: NathanApp, NathanNode and NathanCore. NathanApp and NathanNode are complete solutions delivered as REST services; NathanCore is the base technology on which customers can build their own solutions and infrastructures. ai-one additionally offers open-source infrastructure with REST and JSON. The new versions will soon be announced by press release, but we can already say that the ai-one team has invested more than two years in fundamentally extending the technology so it can be deployed optimally in new system architectures such as the cloud. We are proud of the Gartner ratings. NathanApp is ai-one’s first intelligent agent that any developer can integrate into a solution easily, with just a few clicks. That is ai-one’s contribution to a “smarter planet.”

Posted by: Olin Hyde

Big Win for AI Entrepreneurs

August 30th, 2012

Vicarious Wins $15m in Funding for AI Research – GigaOm

GigaOm reported last Tuesday that entrepreneurs D. Scott Phoenix and Dileep George raised $15 million from prominent venture capitalists Dustin Moskovitz and Peter Thiel.  As Stacey Higginbotham reports, “Vicarious wants to build a series of algorithms that mimic the way the mammalian brain processes and applies information — in short it wants to build software that will grant computers intelligence.”  While we might take issue with Vicarious on the technical fine points, followers of ai-one may be surprised to learn that we were happy to hear the news from Vicarious.

Even though it might appear they are a competitive threat, this is great news for ai-one and others working in this field.  First of all, this field is full of enormous challenges and more credible entrepreneurs in the field will accelerate progress through both competition and collaboration.  There is no one answer to the question of intelligence.  Intelligent agents already provide value across many businesses. The more data generated and the more digitally interconnected we become, the more the benefits move from helpful to essential.

Additionally, fundraising is incredibly painful, especially in a field that has been around for decades and several turns of the hype cycle.  The news in this field has been either academic or dominated by Google, IBM and Apple, causing investors to sit on the sideline when it comes to funding new companies.  Leaving this field of research to them will decelerate progress and more likely result in patent wars and concentration of power, not progress.  As I stated in my interview with Derrick Harris in May, “If we don’t democratize access to AI techniques, we’re essentially handing the keys over to IBM and Google…”.

Thiel and Moskovitz have conferred some legitimacy on the field and, we hope, will be good stewards of Vicarious’ work rather than expecting short-term, VC-style build-and-flip execution.  Patience is critical in this field, as the technical challenges are great and commercialization an even greater one.

Vicarious’ goal is to help humanity thrive by inventing the algorithm(s) to create intelligent machines.

ai-one’s  mission is to enable biologically inspired intelligence in every computing device and application. We want to empower developers to help people to use intelligent computing to protect and better their lives.

We believe these goals are complementary, not competitive.  We hope other new companies in the field will bring similar values, energy and brilliance to the “Mt. Everest of computer science problems”.  It may be a “field of giants” today, but we hope the “computer” you buy in the near future will have intelligence as uniquely personal as you are.

Welcome to the mountain, Vicarious.  We’ve been at this over nine years and there’s plenty of room, but let’s get to the top before Google buys the mountain.

Tom Marsh, President

Gartner Names ai-one Cool Vendor 2012 for Content Analytics

May 15th, 2012

Gartner Cool Vendor in Content Analytics, 2012


*GARTNER named ai-one in Cool Vendors in Content Analytics, 2012. The report reviews five vendors from around the world that offer potentially disruptive innovations for analyzing data to find actionable insights. Unlike traditional business intelligence solutions, these vendors provide technologies that can understand multiple types of information — including both structured and unstructured data.

The core value of ai-one’s technology is to make it easy for programmers to build intelligence into any application. Our APIs provide a way to mimic the way people detect patterns. “This is why we call it biologically inspired intelligence,” says founder and CEO Walter Diggelmann, “because it works just like the human brain.”

Answering the Most Important Questions

 These companies have received tremendous publicity. Both are funded by traditional Silicon Valley venture capital firms. No surprise that they strive to provide comprehensive machine learning solutions rather than a tool for the general programming public.

“We do something completely different! We provide a general purpose tool that you can combine with other technologies to solve a specific problem. We do not try to do everything. Rather we just do one thing: We find the answer to the question you didn’t know to ask,” says Diggelmann.

The advantage of ai-one’s approach to developers is that using the API is easy. The tool finds the inherent meaning of any data by detecting patterns. For example, feed it text and it will find every keyword and determine the association words that give each keyword context. Together, keywords and associations provide a complete and accurate summary of a document. The API gives precise results almost instantly and does not require any specialized training to use. Moreover, it is autonomic: it works without any human intervention.
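To illustrate the claim that keywords plus associations summarize a document: assuming a caller already has keywords and association words back from such an analysis API, it could pick the sentence that best covers them as a simple extractive summary. The scoring scheme and all names below are our own hypothetical sketch, not part of ai-one’s API.

```python
def summarize(text, keywords, associations):
    """Pick the sentence that best covers the given keywords/associations.

    `keywords` and `associations` stand in for output an analysis API
    might return; the scoring below is a deliberately simple stand-in.
    """
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    vocab = set(keywords) | {a for words in associations.values() for a in words}

    def score(sentence):
        words = {w.strip(",;:'\"").lower() for w in sentence.split()}
        # Keywords count double; association words count once.
        return 2 * len(words & set(keywords)) + len(words & vocab)

    return max(sentences, key=score)

text = (
    "ai-one announced a new release today. "
    "The API finds keywords and association words in unstructured text. "
    "Lunch was served afterwards."
)
summary = summarize(
    text,
    keywords=["api", "keywords", "text"],
    associations={"keywords": ["association", "unstructured"]},
)
# summary -> "The API finds keywords and association words in unstructured text"
```

The design point is that the heavy lifting (finding the keywords and associations) happens upstream; once those are known, even a trivial scorer can locate the most representative sentence.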

ai-one follows a technology licensing model — much like Qualcomm. The company makes money when licensees embed the API into commercial applications. ai-one works closely with its OEM partners to ensure that their products are successful.

ai-one’s technology enables programmers to build hybrid analytics solutions that integrate content from almost any digital source, in any language, regardless of its structure (or lack of structure). This capability has the potential to transform the way we think about business intelligence. “90% of the world’s data is unstructured,” says Diggelmann, “but 100% of the major business intelligence systems can’t read or understand it.  We provide a tool to bridge the gap.”

*Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings.  Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact.  Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.