Archive for the ‘text analytics’ Category

Rumsfeld Conundrum- Finding the Unknown Unknown

Tuesday, January 27th, 2015

Since we began building applications using our AI engine, we have been focused on working with ideas or concepts. With BrainDocs we built intelligent agents to find and score similarity for ideas in paragraphs, but we still fell short of the vision we have for our solution. Missing was an intuitive, visual UI to explore content interactively using multiple concepts and metadata (like dates, locations, etc.). We want to give our users the power to create a rich and personal context to power through their research. What do I call this?

Some Google research led me to a great visualization and blog by David McCandless on the Taxonomy of Ideas. While the words in his viz are attributes of ideas, not the ideas themselves, it got me thinking in different ways about the problem.

Taxonomy of Ideas

If you substitute an idea (product or problem) into David’s matrix and add the dimension of time, you create a useful framework. If the idea above were “car”, then the top right might be a Tesla and the bottom left a Yugo (remember those?). Narrow the definition to “electric car” or generalize to “eco-friendly personal transportation” and the matrix changes. But insert an unsolved problem and you have trouble applying the attributes. You also arrive at an innovator’s dilemma (not the seminal book by Clayton Christensen): the challenge of researching something that hasn’t been labeled and categorized yet.

Ideas begin in someone’s head. With research, debate, and engineering, they become products. Products have labels and categories that facilitate communication, search and commerce. The challenge for idea search on future problems is that the opposite holds: the ideas are not yet products, and the problems they solve may not have been defined yet. If I may, Donald Rumsfeld nailed the problem with this famous quote:

“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

And if it’s an unknown unknown, it certainly hasn’t been labeled yet, so how do you search for it? Our CEO Walt Diggelmann used to put it this way: “ai-one gives you an answer to a question you did not know you had to ask.”

Innovators work in this whitespace.

If you could build and combine different intelligent (idea) agents for problems as easily as you test different combinations of words in a search box, you could drive an interactive and spontaneous exploration of ideas. In some ways this is the gift of our intelligence. New ideas and innovation are in great part combinatorial, collaborative and stimulated by bringing together seemingly unrelated knowledge to find new solutions.
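As a rough sketch of what “combining agents like words in a search box” could mean in code (the agents, weights and scoring below are hypothetical, not how BrainDocs is implemented), imagine each idea agent returning a similarity score per paragraph and the UI blending whichever agents you select:

```python
# Hypothetical sketch: blending several concept "agents", each scoring a paragraph
# for similarity to one idea, into a single interactive ranking. Not ai-one's code.
from typing import Callable, Dict, List, Tuple

Agent = Callable[[str], float]   # paragraph -> similarity score in [0, 1]

def rank(paragraphs: List[str], agents: Dict[str, Agent],
         weights: Dict[str, float]) -> List[Tuple[float, str]]:
    """Score each paragraph with a weighted blend of the selected agents."""
    results = []
    for p in paragraphs:
        score = sum(weights[name] * agent(p) for name, agent in agents.items())
        results.append((score, p))
    return sorted(results, reverse=True)

# Toy agents standing in for trained idea agents.
agents = {
    "electric car": lambda p: 1.0 if "electric" in p else 0.0,
    "battery tech": lambda p: 1.0 if "battery" in p else 0.0,
}
weights = {"electric car": 0.7, "battery tech": 0.3}
print(rank(["a new battery chemistry", "an electric sedan"], agents, weights))
```

Swapping agents in and out, or changing their weights, is the code-level analogue of trying different word combinations in a search box.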

Instead of pumping everything into your brain (or an AI) and hoping the ideas pop out, we want to give you the ability to mix combinations of brains, add goals and constraints and see what you can create. Matt Ridley termed this “ideas having sex”. This is our goal for Topic-Mapper (not the sex part).

So what better place to apply this approach than to the exploration of space? NASA already created a “taxonomy of ideas” for the missions of the next few decades. In my next blog I’ll describe the demo we’re working on for the grandest of the grand challenges, human space exploration.

Tom

ai-one and the Machine Intelligence Landscape

Monday, January 12th, 2015

In the sensationally titled Forbes post, Tech 2015: Deep Learning And Machine Intelligence Will Eat The World, author Anthony Wing Kosner surveys the impact of deep learning technology in 2015. This is nothing new for those in the field of AI. His post reflects the recent increase in coverage that artificial intelligence (AI) technologies and companies are getting in business and mainstream media. For a core technology vendor that has worked in AI for over ten years, it’s a welcome change in perspective and attitude.

We are pleased to see ai-one correctly positioned as a core technology vendor in the Machine Intelligence Landscape chart featured in the article. The chart, created by Shivon Zilis, investor at BloombergBETA, is well done and should be incorporated into the research of anyone seriously tracking this space.

Especially significant is Zilis’ focus on “companies that will change the world of work” since these are companies applying AI technologies to innovation and productivity challenges across the public and private sectors. The resulting solutions will provide real value through the combination of domain expertise (experts and data) and innovative application development.

This investment thesis is supported by the work of Erik Brynjolfsson and Andrew McAfee in their book “The Second Machine Age”, a thorough discussion of value creation (and disruption) by innovation that is digital, exponential and combinatorial. The impact of these technologies will change the economics of every industry over the years, if not decades, to come. Progress and returns will be uneven in their impact across industries, regions and demographic sectors. While deep learning is early in Gartner’s Hype Cycle, it is clear that the market value of machine learning companies and data science talent is climbing fast.

The need for data scientists is growing, but the business impact of AI may be limited in the near term by the lack of traditional developers who can apply these technologies. Jeff Hawkins of Numenta has spoken out on this issue and we agree. Building AI applications for “ordinary humans” is a fundamentally different way of working, and until the “killer app” Hawkins speaks about is created, it will be hard to attract enough developers to invest time learning new AI tools. As the chart shows, there are many technologies competing for their time. Developers can’t build applications with buzzwords, one-size-fits-all APIs or collections of open source algorithms. Technology vendors have a lot of work to do in this respect.

Returning to Kosner’s post, what exactly is deep learning and how is it different from machine learning/artificial intelligence? According to Wikipedia,

Deep learning is a class of machine learning training algorithms that:

  • use many layers of nonlinear processing units for feature extraction and transformation. The algorithms may be supervised or unsupervised, and applications include pattern recognition and statistical classification.

  • are based on the (unsupervised) learning of multiple levels of features or representations of the data. Higher level features are derived from lower level features to form a hierarchical representation.
  • are part of the broader machine learning field of learning representations of data.
  • learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • form a new field with the goal of moving toward artificial intelligence. The different levels of representation help make sense of data such as images, sounds and texts.

These definitions have in common (1) multiple layers of nonlinear processing units and (2) the supervised or unsupervised learning of feature representations in each layer, with the layers forming a hierarchy from low-level to high-level features.
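To make that concrete, here is a minimal, generic sketch of a stack of nonlinear processing units (plain NumPy with random weights; it illustrates the definition only and is not ai-one’s technology or any particular deep learning framework):

```python
# Minimal sketch of "multiple layers of nonlinear processing units": each layer
# applies a linear map plus a nonlinearity, and its output is the feature
# representation consumed by the next, higher-level layer.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights):
    h = x
    for W in weights:
        h = relu(h @ W)          # one nonlinear processing layer
    return h                     # highest-level feature representation

rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 8]    # low-level features -> high-level features
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

print(forward(rng.standard_normal(64), weights).shape)   # (8,)
```

In a real deep learning system the weights would be learned (supervised or unsupervised) rather than random; the sketch only shows the layered, hierarchical structure the definitions describe.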

While the last bullet calls this a new field moving toward artificial intelligence, deep learning is generally considered to be part of the larger field of AI already. Deep learning and machine intelligence are not the same as human intelligence. Artificial intelligence in the definition above and in the popular press usually refers to Artificial General Intelligence (AGI). AGI and the next evolution, Artificial Super Intelligence (ASI), are the forms of AI that Stephen Hawking and Elon Musk are worried about.

This is powerful stuff, no question, but as an investor, user or application developer in 2015, look for the right combination of technology, data, domain expertise and application talent applied to a compelling (valuable) problem in order to create a disruptive innovation (value). This is where the money is over the next five years, and this is our focus at ai-one.

Tom

Personal AI Helps Convert Social CRM for Recruiting

Thursday, June 26th, 2014

Given the need for more effective content marketing and better quality lead generation, why aren’t the tools better? Certainly there are lots of applications, SaaS products and services available for all parts of the marketing and sales process. With BrainBrowser we provide a tool that can understand your marketing content, match it to bloggers, LinkedIn connections and Twitter followers, and find candidates in places you would never think to look.

Since about one-third of the 7,500+ queries by our testers used BrainBrowser to search for people, a key objective is to add features to manage the results and integrate them into your workflow. If you find someone relevant to your work or a potential recruit, you should be able to connect with them right from the list, follow them on Twitter, or share lists of candidates with collaborators.

BrainBrowser with Nimble Popup

As a recruiting professional your task is to find the candidates and conversations on the web where conversions will be maximized and get there first.  BrainBrowser does this for you, creating a list of people, companies and sites that match the content of your position and company description.

As a sales professional, you want to use content, either from your marketing department or content you find and create on your own, to engage your network and to identify the people that are talking about and responsible for buying/influencing a purchase.

In our research (using BrainBrowser) we discovered Nimble and a new category of Social CRM vendors with applications driving social selling (check out Gerry Moran’s post for background on content and social selling). We were immediately hooked and started using Nimble as our company CRM, and quickly found it also works well for managing lists of candidates.

Nimble, a new social CRM application, has made integration easy and I’m recommending it to everyone. All you need to do is sign up for the trial (it’s only $15 per month if you like it) and install the plug-in in your Chrome browser. You’ll then be able to highlight the name of a person on the list in BrainBrowser, right-click, select Nimble Search, and a popup will display the person’s social media pages on LinkedIn, Twitter, Google+, etc. Click Save and you’ve added them to your Nimble Contacts, where you can view their social media messages and profile and decide whether to connect or follow. Tag them and you’ve created a recruiting hot list you can track in Nimble.

Here’s a video clip I tweeted to CEO Jon Ferrara demonstrating how/why we love it.  This was in response to his video clip to Larry Nipon following up on my referral.

Let me know how you like it. They do a great job, but if you have any questions on the difference between CRM and Social CRM, or on how we’re using it for recruiting, just ask. Be sure to mention @ai_one or @tom_semantic if you tweet about this, and sign up to request a login for BrainBrowser.

As of today, there are only 22 slots left for FREE registrations under the Alpha test program.  Participation gets you a year free on the platform.  Email or tweet @tom_semantic to sign up.

ai-one Contributes to ETH Publication on Knowledge Representation

Tuesday, June 3rd, 2014

We are pleased to announce the availability of the following publication from the prestigious ETH Zurich. This book will be a valuable resource for developers, data scientists, and search and knowledge management educators and practitioners trying to deal with the massive amounts of information in both public and private data sources. We are proud to have our contribution to the field acknowledged in this way.

Knowledge Organization and Representation with Digital Technologies

http://www.degruyter.com/view/product/205460  |  ISBN: 978-3-11-031281-2

ai-one was invited to co-author a chapter in this technical book.

ETH Publication: Knowledge Representation

In this anthology, readers will find a synopsis of very different conceptual and technological methods for modeling and digitally representing knowledge in knowledge organizations (universities, research institutes and educational institutions) and in companies, presented through practical examples. Both basic models of knowledge organization and technical implementations are discussed, including their limitations and difficulties in practice, particularly in the areas of knowledge representation and the semantic web. Best-practice examples and successful application scenarios provide the reader with a knowledge repository and a guide for implementing their own projects. The following topics are covered in the articles:

  •  hypertext-based knowledge management
  • digital optimization of the proven analog technology of the list box
  • innovative knowledge organization using social media
  • search process visualization for digital libraries
  • semantic events and visualization of knowledge
  • ontological mind maps and knowledge maps
  • intelligent semantic knowledge processing systems
  • fundamentals of computer-based knowledge organization and integration

The book also covers the coding of medical diagnoses, contributions to the automated creation of records management models, the business fundamentals of computer-aided knowledge organization and integration, the concept of mega-regions to support search processes, and the management of print publications in libraries.

Available in German only at this time.

Wissensorganisation und -repräsentation mit digitalen Technologien

http://www.degruyter.com/view/product/205460  |  ISBN: 978-3-11-031281-2


ai-one named Finalist in SDBJ Innovation Awards for 2013

Thursday, June 27th, 2013

At the San Diego Business Journal’s annual Innovation Awards event, ai-one was named a finalist in the technology category. The award was presented on June 18th at Scripps, at an event attended by several hundred leaders in San Diego’s tech, medical, software and telecom industries. ai-one received the award for its leading-edge technology in machine learning and content analytics, as evidenced by the release this year of the new Nathan API for deep learning applications.

The award was accepted by ai-one COO Tom Marsh and partner for defense and intelligence, Steve Dufour, CEO of ISC Consulting of Arizona.

Tom Marsh & Steve Dufour at SDBJ Innovation Awards

Ai-one’s ‘Artificial Brain’ Has a Real Eye for Data (SDBJ)

TECH: Software Can Dig Through and Decipher Information

Software writer ai-one Inc. doesn’t just promise code. The company promises to pull new perspectives and second opinions from seemingly inscrutable data.

SDDT recognizes ai-one’s presentation at CommNexus to SK Telecom of South Korea

Thursday, June 27th, 2013

ai-one was recognized for its participation in the CommNexus MarketLink event on June 4th in San Diego, California. The event featured companies from across the US, selected by SK Telecom for their potential to add value to SK Telecom’s network. The meeting was also attended by SK’s venture group based in Silicon Valley.
 
Tierney Plumb of the San Diego Daily Transcript reported, “San Diego-based ai-one inc. pitched its offerings Tuesday to the mobile operator. The company, which has discovered a form of biologically inspired neural computing that processes language and learns the way the brain does, was looking for two investments — each about $3 million — from SK. One is a next-generation Deep Personalization Project whose goal is to create an intimate personal agent while providing the user with total privacy control. ”
 
For the full text of this article, see the San Diego Source (Technology) article “Startups line up to meet with SK Telecom.”

Building Intelligent Agents: Google Now versus Apple SIRI?

Friday, December 14th, 2012

It has been a long time since our last blog post. Why? We’ve been busy learning how to build better intelligent agents.

Today, Kurt and I were discussing ways to improve feature detection algorithms for use in a prototype application called ai-BrainDocs. This is a system that detects concepts within legal documents. This is a hard problem because different legal concepts (or ideas) are often expressed using the same words. That is, there are no distinguishing keyword features in the text.

ai-one’s technology is able to solve this problem by understanding how the same word (keyword) can mean different things by its context (as defined by association words). Together, keywords and associations create an array that we call an ai-Fingerprint. This can be thought of as a graph that can be represented as G[V,E]. ai-Fingerprints are easy to build using our Topic-Mapper API.
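To make the graph idea concrete, here is a rough sketch (the class, example words and similarity measure are illustrative only; this is not the actual ai-Fingerprint data structure or the Topic-Mapper API):

```python
# Illustrative sketch of a keyword/association "fingerprint" as a graph G[V, E]:
# vertices are keywords and association words, edges link keywords to associations.
from collections import defaultdict

class Fingerprint:
    def __init__(self):
        self.edges = defaultdict(set)          # keyword -> association words

    def add(self, keyword, association):
        self.edges[keyword].add(association)

    def vertices(self):
        v = set(self.edges)
        for assocs in self.edges.values():
            v |= assocs
        return v

    def similarity(self, other):
        """Jaccard overlap of edge sets as a stand-in similarity score."""
        a = {(k, w) for k, ws in self.edges.items() for w in ws}
        b = {(k, w) for k, ws in other.edges.items() for w in ws}
        return len(a & b) / len(a | b) if a | b else 0.0

fp = Fingerprint()
fp.add("indemnification", "liability")
fp.add("indemnification", "third-party")
print(sorted(fp.vertices()))   # ['indemnification', 'liability', 'third-party']
```

The point is only that vertices carry keywords and their context words, edges carry the relationships, and two fingerprints can be compared by how much of that structure they share.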

We pondered how the intelligent agents developed by Google for Android (called Google Now) and by Apple for iOS (called SIRI) might perform on a simple test. We picked a use case where the words were sparse but unique: looking for the status of a departing flight on American Airlines. Both Google Now and Apple SIRI have tremendous advantages over ai-one because they: 1) have a lot more money to spend on R&D, 2) use expensive voice recognition technologies, and 3) store all queries made by every user, so they can apply statistical machine learning to refine results from natural language processing (NLP).

Unlike Apple and Google, ai-one’s approach is not statistical. We use a new form of artificial neural network (ANN) that detects features and relationships without any training or human intervention. This enables us to do something that Google and Apple can’t: autonomic learning. This is a huge advantage when you need to build machine learning applications to find information but can’t define in advance what you are seeking, which is common in so-called “Big Data” problems. It is also much cheaper, faster and more accurate than the statistical machine learning tools that Apple and Google are pushing.

 

Posted by: Olin Hyde

Lead Analyst Firm Names ai-one “Who’s Who in Text Analytics”

Wednesday, September 19th, 2012

ai-one evaluated as machine learning for text vendor

We are proud to report that Gartner* cites ai-one in its September 14 report, Who’s Who in Text Analytics. Analysts Daniel Yuen and Hans Koehler-Kruener based this report on a survey of 55 vendors conducted in April 2012. Vendors were included only if they offer distinct text analytics products, not if their text analytics technology is merely part of another product. ai-one offers a general-purpose, autonomic machine learning tool that can be embedded within other applications. Earlier this year, Gartner named ai-one one of its “Cool Vendors 2012”* for content analytics. We believe the coverage of ai-one as a text analytics provider indicates the importance that Gartner places on the ability to evaluate information that cannot be processed using traditional tools that depend on tables, rows and models.

“Language is not math.”

ai-one uses a completely new form of machine learning to detect the meaning of text. The technology evaluates any length of text to isolate keywords and associations. The keywords are the most important words – the words that are central to the meaning of the document. The association words are the words that give the keywords context.

“Making sense of short text.”

Text analytics is particularly difficult for short texts – such as social media feeds from Facebook and Twitter. Humans are great at seeing the meaning in a few words. Computers are not.

ai-one’s context detection technology provides an easy solution to this problem. For example, our technology can learn the meaning of a very short text, such as a tweet: “Will Google eat Apple with the new J2ObjC?” It immediately detects the keywords ‘Google,’ ‘Apple’ and ‘J2ObjC’ and the associations ‘eat’ and ‘new.’ The system learns the meaning of these words by adding more association words to the keywords as it is fed additional tweets. The more tweets, the more it learns. No human intervention or training sets are required, although the system learns faster if it is taught. In many ways, ai-one’s technology learns just like a human. It detects context by evaluating the associations of words. Most impressive, it forms concepts by connecting groups of associations.
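ai-one has not published how the keyword and association detection actually works, so the following is a deliberately naive stand-in (the stopword list and length-based scoring are invented for illustration, not our algorithm) that just mimics the tweet example above:

```python
# Naive illustrative sketch only: pick "keywords" as the longest/rarest content words
# in a short text and treat the remaining content words as their "associations".
# This is NOT ai-one's algorithm; it only mirrors the keyword/association vocabulary.
import re

STOPWORDS = {"will", "the", "with", "a", "an", "of", "and", "to", "is"}

def extract(text, n_keywords=3):
    words = [w.lower() for w in re.findall(r"[A-Za-z0-9]+", text)]
    content = [w for w in words if w not in STOPWORDS]
    # Stand-in "importance": longer tokens first (a real system would learn this).
    keywords = sorted(set(content), key=len, reverse=True)[:n_keywords]
    associations = [w for w in content if w not in keywords]
    return keywords, associations

kw, assoc = extract("Will Google eat Apple with the new J2ObjC?")
print(kw, assoc)   # e.g. ['google', 'j2objc', 'apple'] ['eat', 'new']
```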

 “ai-one thinks different.”

This approach is radically different from the rules-based approach used by IBM and the Bayesian statistical approaches of SAS and Autonomy. ai-one is purely a pattern recognition tool for multiple higher-order concepts. It finds the inherent meaning in any text simply by seeing how words connect with each other. Unlike AlchemyAPI, Textifier and other competitors that use ontologies connected to natural language processing (NLP), our technology works equally well in any language.

Prelude to the debut of NathanApp

ai-one’s Topic-Mapper SDK and API will soon be replaced with a cloud-deployable API called NathanApp. NathanApp and NathanNode are REST services through which we offer a complete analytics solution as a service. NathanCore is the native technology on which customers build their own interfaces using REST or any other standard. ai-one also plans to offer an open source infrastructure for NathanCore and NathanApp/NathanNode, where REST, JSON and other functions and services are offered as open source code. Details of NathanApp will be released in a future press release, but it is safe to say that ai-one’s research and development team has spent almost two years developing new technology that will enable ai-one technology to be used by anyone, anywhere, on any device.
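Since NathanApp’s interface has not been released yet, the endpoint, parameters and response fields in the sketch below are invented purely to illustrate what a REST analytics service returning JSON could look like from a developer’s point of view:

```python
# Hypothetical sketch of calling a REST text-analytics service over JSON.
# The endpoint URL, request body and response fields are invented for illustration
# and are NOT NathanApp's actual (unreleased) API.
import json
import urllib.request

def analyze(text, base_url="https://nathan.example.com/api/v1"):
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/fingerprint",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # e.g. {"keywords": [...], "associations": [...]}
        return json.load(resp)

# result = analyze("Will Google eat Apple with the new J2ObjC?")
```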

We are very proud that Gartner has acknowledged ai-one as a Who’s Who and Cool Vendor. Moreover, we look forward to showing you very soon how NathanApp will change everything: Nathan will be the first intelligent agent that any developer can embed in any application. This is what ai-one considers a “smarter planet.”

*Gartner, Inc., Cool Vendors in Content Analytics, Rita L. Sallam, et al., April 26, 2012. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

 

Gartner names ai-one in the “Who’s Who in Text Analytics”

ai-one is listed as a leading machine learning company in the text domain

We are proud to announce that Gartner has listed ai-one as a leading technology company for text analysis in its latest research report, the Who’s Who in Text Analytics of September 14. Analysts Daniel Yuen and Hanns Koehler-Kruener examined and compared a total of 28 vendors, including the industry “heavyweights” IBM, SAS, SAP and Autonomy. ai-one is listed as the only vendor offering an independent, general-purpose application for cross-task solutions. Earlier this year, Gartner had already selected ai-one as one of the “Cool Vendors 2012” for content analytics. Gartner considers content analytics one of the most important future tasks within business intelligence applications because it can analyze both structured and unstructured content and recognize its meaning. The way ai-one is presented as a leader in text analytics clearly shows how highly Gartner rates this topic. Gartner also clearly supports the new approaches in text analytics because it wants to highlight the importance of intelligent tools that go beyond working with tables and models.

“Language is not math.”

Unlike the other companies listed in the report, ai-one has a new approach to making machine learning more intelligent and precise. The ai-one approach can analyze text of any length and spontaneously recognizes its meaning and keywords. These keywords are the most important words, the ones that in combination determine the meaning of a text. ai-one also recognizes the association words that give the keywords their context.

“ai-one recognizes meaning even in short texts.”

Text analysis and meaning recognition are particularly difficult for short texts. Feeds, tweets and Facebook posts often contain only short sentences, which nevertheless carry plenty of meaning in the aggregate. Apart from ai-one, all the other vendors in Gartner’s report rely on language-dependent rule systems and world models.

ai-one can analyze even a very short text such as the tweet “Will Google eat Apple with the new J2ObjC?” The words ‘Google,’ ‘Apple’ and ‘J2ObjC’ are immediately and automatically recognized as keywords, along with the associations ‘eat’ and ‘new.’ The ai-one technology spontaneously learns the meaning of these words from their context across other tweets. The more tweets there are on a topic, the more precisely ai-one understands the meaning and the relationships; the more tweets, the smarter ai-one becomes. No manual intervention is required: ai-one learns faster and better the more content is available. One can say that ai-one’s technology learns like a human. It recognizes meaning from the context in which the individual words are used. Beyond that, ai-one is able to recognize nested concepts from connected associations.


ai-one thinks differently

ai-one pursues a radically different approach from the model- and rule-based systems used by IBM, SAS or SAP. Bayesian and statistical approaches can recognize patterns, but they always require models and static rule sets. ai-one’s Nathan finds the inherent relationships and meanings in text because it recognizes the semantic connections and associative meanings. Ontologies or thesauri, as well as NLP, serve ai-one as a complement for refining interpretations, above all when the text itself is of poor quality. The ai-one core is completely language-independent.

Preview of the debut of NathanApp

The Gartner report was based on an evaluation in June 2012 and is thus already almost out of date. ai-one’s Topic-Mapper SDK/API, which was examined at the time, has since been replaced by NathanCore. ai-one will shortly release the next generation: NathanApp, NathanNode and NathanCore. NathanApp and NathanNode are REST services offered as a complete solution. NathanCore is the base technology on which customers can build their own solutions and infrastructures. ai-one will additionally offer an open source infrastructure with REST and JSON. The new versions will soon be announced in a press release, but we can already say that the ai-one team has invested more than two years in fundamentally extending the technology so that it can be used optimally in new system architectures (e.g. the cloud). We are proud of the Gartner evaluations. NathanApp is the first intelligent agent from ai-one that any developer can integrate into a solution easily and with just a few clicks. That is ai-one’s contribution to a “smarter planet.”

Posted by: Olin Hyde