Archive for the ‘Intelligent Agents’ Category

Personal AI Helps Convert Social CRM for Recruiting

Thursday, June 26th, 2014

Given the need for more effective content marketing and better quality lead generation, why aren’t the tools better? Certainly there are lots of applications, SaaS products and services available for every part of the marketing and sales process. With BrainBrowser we provide a tool that understands your marketing content, matches it to bloggers, LinkedIn connections and Twitter followers, and finds candidates in places you would never look.

Since about one-third of the 7,500+ queries by our testers were searches for people, a key objective is to add features for managing the results and integrating them into your workflow. If you find someone relevant to your work, or a potential recruit, you should be able to connect with them right from the list, follow them on Twitter or share lists of candidates with collaborators.

BrainBrowser with Nimble Popup

As a recruiting professional, your task is to find the candidates and conversations on the web where conversions will be maximized, and to get there first. BrainBrowser does this for you, creating a list of people, companies and sites that match the content of your position and company description.

As a sales professional, you want to use content – either from your marketing department or content you find and create on your own – to engage your network and to identify the people who are talking about a purchase and those responsible for buying or influencing it.

In our research (using BrainBrowser) we discovered Nimble and a new category of social CRM vendors whose applications drive social selling (check out Gerry Moran’s post for background on content and social selling). We were immediately hooked and started using Nimble as our company CRM, but quickly found it also worked well for managing lists of candidates.

Nimble, a new social CRM application, has made integration easy, and I’m recommending it to everyone. All you need to do is sign up for the trial (it’s only $15 per month if you like it) and install the plug-in in your Chrome browser. You’ll then be able to highlight the name of a person on the list in BrainBrowser, right-click and select Nimble Search, and a popup will display the person’s social media pages on LinkedIn, Twitter, Google+, etc. Click Save and you’ve added them to your Nimble contacts, where you can view their social media messages and profile and decide whether to connect or follow. Tag them and you’ve created a recruiting hot list you can track in Nimble.

Here’s a video clip I tweeted to CEO Jon Ferrara demonstrating how/why we love it.  This was in response to his video clip to Larry Nipon following up on my referral.

Let me know how you like it. Nimble does a great job, but if you have any questions about the difference between CRM and social CRM, or about how we’re using it for recruiting, just ask. Be sure to add @ai_one or @tom_semantic if you tweet about this, and sign up to request a login for BrainBrowser.

As of today, there are only 22 slots left for FREE registrations under the Alpha test program.  Participation gets you a year free on the platform.  Email or tweet @tom_semantic to sign up.

Context, Graphs and the Future of Computing

Friday, June 20th, 2014

Robert Scoble and Shel Israel’s latest book, Age of Context, is a survey of contributions from across the globe to the forces influencing technology and our lives today. The five forces are mobile, social media, data, sensors and location. Scoble calls these the five forces of context; harnessed, they are the future of computing.

Pete Mortensen also addressed context in his brilliant May 2013 article in Fast Company, “The Future of Technology Isn’t Mobile, It’s Contextual.” So why is context so important (and difficult)? First, context is fundamental to our ability to understand the text we’re reading and the world we live in. In semantics, there is the meaning of the words in the sentence; the context of the page, chapter, book and prior works or conversations; and the context the reader’s education and experience add to the understanding. As a computing problem, this is the domain of text analytics.

Second, if you broaden the discussion as Mortensen does to personal intelligent agents (Siri, Google Now), the bigger challenge is complexity. The inability to understand context has always made it difficult for computers and people to work together. People, and the language we use to describe our world, are complex, not mathematical; you can’t be reduced to a formula or rule set, no matter how much data is crunched. Mortensen argues (and we agree) that the five forces are finally giving computers the foundational information needed to understand “your context,” and that context is expressed in four data graphs:

  • Social (friends, family and colleagues),
  • Interest (likes & purchases),
  • Behavior (what you do & where) and
  • Personal (beliefs & values).

While Google Glass might be the poster child of a contextual UX, ai-one has the technology to power these experiences by extracting Mortensen’s graphs from the volumes of complex data generated by each of us through our use of digital devices and interaction with increasing numbers of sensors known as the Internet of Things (IoT).  The Nathan API is already being used to process and store unstructured text and deliver a representation of that knowledge in the form of a graph.  This approach is being used today in our BrainDocs product for eDiscovery and compliance.
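For developers, the interaction pattern is simple: send unstructured text, get back a graph. Here is a minimal Python sketch of what such a request might look like; the URL, route and response fields below are placeholders for illustration, not the documented Nathan API:

    import requests

    NATHAN_URL = "https://example.ai-one.com/nathan/analyze"  # placeholder

    def extract_graph(text, api_key):
        # Send raw text; expect keywords plus the association words
        # that give each keyword its context (assumed response shape).
        r = requests.post(NATHAN_URL,
                          headers={"Authorization": "Bearer " + api_key},
                          json={"text": text},
                          timeout=30)
        r.raise_for_status()
        data = r.json()  # e.g. {"keywords": [...], "associations": {kw: [...]}}
        return [(kw, a) for kw in data["keywords"]
                for a in data["associations"].get(kw, [])]

Each (keyword, association) pair is one edge of the knowledge graph described above.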

Age of Context by Scoble and Israel

In Age of Context, ai-one is pleased to be recognized as a new technology addressing the demands of these new types of data. The data and the applications that use them are no longer stored in silos where only domain experts can access them. With Nathan, the data space learns from the content, delivering a more relevant contextual response to applications in real time, with user interfaces that are multi-sensory, human and intuitive.

We provide developers this new capability in a RESTful API. In addition to extracting graphs from user data, developers can build biologically inspired intelligent agents that they can train and embed in intelligent architectures. Our new Nathan is enriched with NLP in new Python middleware that allows us to reach more OEM developers. Running in the cloud and integrated with big data sources and ecosystems of existing APIs and applications, developers can quickly create and test new applications or add intelligence to old ones.

For end users, the Analyst Toolbox (BrainBrowser and BrainDocs) demonstrates the value proposition of our new form of artificial intelligence and shows developers how Nathan can be used with other technologies to solve language problems.  While we will continue to roll out new features to this SaaS offering for researchers, marketers, government and compliance professionals, the APIs driving the applications will be available to developers.

Mortensen closes, “Within a decade, contextual computing will be the dominant paradigm in technology.”  But how?  That’s where ai-one delivers.  In coming posts we will discuss some of the intelligent architectures built with the Nathan API.

ai-one named Finalist in SDBJ Innovation Awards for 2013

Thursday, June 27th, 2013

At the San Diego Business Journal Annual Innovation Award event, ai-one was named a finalist in the technology category. The award was presented at the prestigious event on June 18th at Scripps, attended by several hundred leaders in San Diego’s tech, medical, software and telecom industries. ai-one received the award for its leading edge technology in machine learning and content analytics, as evidenced by the release this year of the new Nathan API for deep learning applications.

The award was accepted by ai-one COO Tom Marsh and partner for defense and intelligence, Steve Dufour, CEO of ISC Consulting of Arizona.

Tom Marsh & Steve Dufour at SDBJ Innovation Awards

Ai-one’s ‘Artificial Brain’ Has a Real Eye for Data – SDBJ

TECH: Software Can Dig Through and Decipher Information

Software writer ai-one Inc. doesn’t just promise code. The company promises to pull new perspectives and second opinions from seemingly inscrutable data.

SDDT recognizes ai-one’s presentation at CommNexus to SK Telecom of South Korea

Thursday, June 27th, 2013

ai-one was recognized for its participation in the CommNexus MarketLink event June 4th in San Diego California. The event featured companies from all across the US selected by SK Telecom for their potential to add value to SK Telecom’s network. The meeting was also attended by SK’s venture group based in Silicon Valley.
 
Tierney Plumb of the San Diego Daily Transcript reported, “San Diego-based ai-one inc. pitched its offerings Tuesday to the mobile operator. The company, which has discovered a form of biologically inspired neural computing that processes language and learns the way the brain does, was looking for two investments — each about $3 million — from SK. One is a next-generation Deep Personalization Project whose goal is to create an intimate personal agent while providing the user with total privacy control. ”
 
For the full text of this article, see the San Diego Source story “Startups line up to meet with SK Telecom.”

Collaboration, Artificial Intelligence and Creativity

Thursday, April 4th, 2013

We are thrilled to publish this guest blog by Dan Faggella – a writer with a focus on the future of consciousness and technology. ai-one met Dan online through his interest in the beneficial development of human and volitional (sentient) potential. Dan is a national martial arts champion in Brazilian Jiu Jitsu and a Masters graduate of the prestigious Positive Psychology program at the University of Pennsylvania. His eclectic writings and interviews with philosophers and technology experts can be found online at www.SentientPotential.com.

Artificial Intelligence as a Source for Collaboration

At a recent copywriting event in Las Vegas, I heard a nationally renowned writer of sales letters and magazine ads mention something that resonated with me. He said that copywriters are generally isolated people who like to work at home on a laptop – not in a big room with other people, or in a cubicle in an office – but that some of the absolute best ad agencies were getting their best results by “forcing” (in his words) their best copywriters to work together on important pitches and sales letters, delivering a better product than any of them could have alone.

Some people in the crowd seemed surprised, and the copywriter on stage mentioned that many “a-list” copywriters tend to think their creativity and effectiveness will be stifled by pandering to the needs of other writers, or by arguing over methods and approaches to writing. In my opinion, however, this notion of the “genius of one” is on the way out, even in fields where creativity rules.

If we take the example of sports, the need for feedback and collaboration is somehow more obvious. A professional football team does not have one genius coach; it has offensive, defensive, and head coaches with teams of assistant coaches. In addition, top athletes from basketball to wrestling to soccer are usually eager to play with and against a variety of teammates and opponents in order to broaden their skills and test their game in new ways. The textbooks on the development of expertise are full of examples from the world of sport, especially pertaining to feedback, coaching, and breaking from insularity.

The focus of my graduate studies at UPenn was the domain of skill development, where the terms “feedback” (perspective and advice from experts outside oneself) and “insularity” (a limited scope of perspective based on an inability or unwillingness to seek out or take in the perspective of other experts) are common. In sport, insularity is clearly seen as negative. In literature or philosophy, however, the “genius of one” still seems to reign.

Why might this be the case, when in so many other fields (chess, sports, business, etc.) we see collaboration proliferate? I believe the answer lies partially in the individual nature of these fields, but that new approaches to collaboration – and particularly new applications of artificial intelligence – will eventually break down the insularity in these and many other “creative” fields.

What is Creativity & Collaboration All About, Anyway?

Creativity, in short, is the ability to create, or to bend rules and convention in order to achieve an end. Collaboration is working jointly on a project. Both, in my mind, imply the application of more intelligence to a particular problem.

Just as three top copywriters can put together a better sales letter (generally) than one copywriter, three top chess players are more likely to defeat a computer chess program (generally) than one top chess player alone.

Technology allows us to bring more to bear when it comes to applying intelligence. Even in the relatively simple task of putting together this article, I am able to delete, reorganize, link, and research thanks to my laptop and the internet. I bring more than my brain and a pen on paper could do alone. I may not be “collaborating,” but I am applying the information and research of others to my own work in real time.

Artificial intelligence adds an entirely new level of “applied intelligence” to projects – one that may extend beyond what internet research and human collaboration could ever achieve. For our purposes today, the progression from “less” to “more” applied intelligence will be: working alone, working with others, working with others and researching online, and applying artificial intelligence. We already have tremendous evidence of this today in a vast number of fields.

Applications Already Underway

I will argue that, in general, collaboration and the application of artificial intelligence will be prevalent in a field based primarily on: the competitiveness of that field (in sports and business, for instance, competition is constant, so testing and evaluating can be constant), the popularity or perceived importance of the field (trivial matters rarely hold the attention of groups of smart people, and are even less likely to garner grants or resources), and the lucrativeness of that field (as in finance).

In finance, for example, the highly competitive, highly lucrative and high-speed work of number-crunching and pattern-recognition has been one of the most prominent domains for applications of AI. Not only are human decisions bolstered by amazingly complex real-time data, but many “decisions” are no longer made by humans at all – they are completely or mostly automated based on streaming data and making sense of patterns. It is estimated that nearly 50% of all trades in American and European markets are made automatically – and that share is likely to increase.

Anyone who has visited Amazon.com, Google, or Facebook knows that advertisements and promoted products are calibrated specifically to each user. This is not done by a team of guessing humans individually testing ads and success rates; it is performed by intelligent, learning algorithms that use massive amounts of data from massive numbers of users (including data from beyond their own sites) to present the advertisements or products most likely to generate sales.

The above seem like obvious first applications of the expensive technologies of AI because of the amount of money involved and the necessity for businesses to stay ahead in a competitive marketplace (generating maximum revenue, giving customers offers that they want, etc.). Implications have already been seen in sports, with companies like Automated Insights providing intelligent sports data and statistics in regular, human language in real time. My guess is that in the big-money world of professional sport, even this kind of advanced reporting is only the very tip of the iceberg.

However, the implications will soon also reverberate into the worlds of more “complex” systems of meaning, as well as fields where the economic ramifications are less certain. I believe that the humanities (poetry, literature, philosophy) will see a massive surge of applied intelligence that will not only break the mold of the “genius of one,” but will also open doors to all of the future possibilities of AIs contributing to “creative” endeavors.

Future Implications of AI in “Creative” Fields / The Humanities

It seems perfectly reasonable that more applications for AI have been found in the domain of finance than in the domain of philosophy or literature. Finance involves numbers and patterns, while literature involves more complex and arbitrary ideas of “meaning” and a system of much more complicated symbols.

However, I must say that I am altogether surprised that there seems to be very little application of AI to the humanities. In part, I believe this is a problem of applying AI to complex matters of “meaning” and subjective standards of writing quality (there is no clear “bottom line” as there is in finance), but the notion of the “genius of one” invariably plays a part in this trend as well, as even collaboration among humans (never mind collaboration with an AI) is comparatively limited in these fields.

Not being a novelist, I can hardly say that if writers collaborated with other expert writers more often, they would create “better” overall works. I have an inkling, however, that this might be the case.

In the world of psychology, I believe that, outside the desire to “hog the glory,” expert researchers would almost certainly take up the opportunity to collaborate on their most important projects with other expert researchers in the field. In a world of flowing data streams, applying AI and statistical models might also seem more natural.

In philosophy – where works are generally still seen to be completed by lone, pensive thinkers in dark, pensive rooms – I believe that collaboration and AI will eventually transcend the “genius of one,” and rid us of the notion that the best work is done by solo minds.

If one philosopher spent 12 months aiming to compare and find connections between the ethics of Aristotle and Epictetus, I would argue that 12 very smart philosophers working together for 12 months might achieve much more insight.

Similarly, if intelligent algorithms could be created to detect commonalities in terms, symbols, and meanings, entirely new connections and insights might be made possible, and much vaster reams of philosophical text could be analyzed in a much more uniform fashion – producing an objective perspective completely unattainable by human beings without an AI aide. I believe this is already possible, though its applications in philosophy and the humanities in general seem almost nonexistent outside of a few events and experiments.

I believe very much in the power of the individual mind, and mean no disrespect to human capacity or to individual thinkers when I say that the era of the “genius of one” is going to progressively evaporate. In 1920, you might have been able to win the Nobel Prize in your 40s with a small team of researchers. In 2020, you’re more likely to win the Nobel Prize in your 60s with a global research team that has been hard at work for decades. Even the more “creative” domains of the humanities will experience a similar shift as collaboration becomes more common, research becomes simpler, and intelligence becomes more and more prevalent and nuanced.

Conclusion: Robot Shakespeare?

It is interesting to posit that at some point – potentially within this century – the best prose, the best novels, and the best philosophical insight will come not from individual geniuses, not even from teams of researchers, but almost entirely from AI.

This is not to say that I believe a “robot Shakespeare” will be in our midst anytime soon – but rather that we ought to keep our minds open to the idea of AI being something other than calculators and cars that drive themselves. The nuanced connections of meaning can already be used to supplement human efforts with insights in many domains, and in a period of 20, 40, or 60 years, we may see all elements of human capacity (not just statistical number-crunching) enhanced a billion-fold by the AIs of the future.

The ethical, political, and other implications aside, let us keep our eyes open for the implications of applied intelligence across all fields of human endeavor. We may question technology’s ability to contribute, but remember that it was less than 70 years between the early flights of the Wright brothers and the landing on the moon. Might we see a similar time frame between the advent of Amazon’s intelligent product offers and the replacement of humans at the helm of creative endeavor in writing, philosophy, poetry, and beyond? Only time will tell.

Thinking forward,

-Daniel Faggella

Big Data Solutions: Intelligent Agents Find Meaning of Text

Friday, January 18th, 2013

 

ai-BrainDocs Agent

What if your computer could find ideas in documents? Building on the idea of fingerprinting documents, ai-one helped develop ai-BrainDocs – a tool that mines large sets of documents to find ideas using intelligent agents. This solves a big problem for knowledge workers: how to find ideas in documents that are missed by traditional keyword search tools (such as Google, Lucene, Solr, FAST, etc.).

Customers Struggle with Unstructured Text

Almost every organization struggles to find value in “big data” – especially the ideas buried within unstructured text. Often a very limited vocabulary can be used to express very different ideas. Lawyers are particularly talented at this: they can use 100 unique words to express thousands of ideas simply by changing the ordering and frequencies of the words.

Lawyers are not the only ones who need to find ideas inside documents. Other use cases include finding and classifying complaints, identifying concepts within social media feeds such as Twitter or Facebook, and mining PubMed to find related research articles. Recently, several healthcare companies have contacted us about mining electronic health record (EHR) data to find information buried within doctors’ notes so they can predict adverse reactions, find co-morbidity risks and detect fraud.

The common denominator for all these use cases is simple: how do you find “what matters most” in documents? And how do you find these ideas fast enough to keep pace with the growth in documents? Given that information is growing at almost 20% per year, a very big problem now will be an enormous one next year.

Problems with Current Approaches

We’ve heard numerous stories from customers frustrated at the cost, complexity and expertise required to implement solutions that enable machines to read and understand the meaning of free-form text. Often these solutions use latent semantic indexing (LSI) and latent Dirichlet allocation (LDA). In one case, a customer spent more than two years trying to combine LSI with a Microsoft FAST enterprise search appliance running on SharePoint. It failed because they were searching a high volume of legal documents with very low variability: legal contracts, looking for paragraphs that included a very specific legal concept that could be expressed with many different combinations of words. Keyword search failed because the legal concept was built from commonly used words. LSI and LDA failed because those systems required a very large training set – often hundreds of documents. Even after reducing the specificity requirements, LSI and LDA still failed because they could not find the legal ideas at the paragraph level.

Inspiration

We found inspiration in the complaints we heard from customers: what if we could build an “intelligent agent” that could read documents like a person? We thought of the agent as an entry-level staff person who could be taught with a few examples and then highlight paragraphs that were similar to (but not exactly like) the teaching examples.

Solution: Building Intelligent Agents

For several months, we have been developing prototypes of intelligent agents that mine unstructured text to find meaning. We built a Java application that combines ai-one’s machine learning API with natural language processing (OpenNLP) and a NoSQL database (MongoDB). Our approach generates an “ai-Fingerprint,” a representational model of a document built from keywords and association words. The ai-Fingerprint is similar to a graph G[V,E] where G is the knowledge representation, V (vertices) are keywords, and E (edges) are associations. This can also be thought of as a topic model.

ai-Fingerprint

The ai-Fingerprint can be generated for almost any size of text – from sentences to entire libraries of documents. As you might expect, the “intelligence” (or richness) of the ai-Fingerprint is proportional to the size of the text it represents. Very sparse text (such as a tweet) has very little meaning. Large texts, such as legal documents, are very rich. This approach to topic modeling is precise – even without training or using external ontologies.
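As a rough sketch of the storage side (the field names below are our own illustration, not the production schema), each ai-Fingerprint can be saved as a JSON document in MongoDB and queried back out with a few lines of Python:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    fingerprints = client["braindocs"]["fingerprints"]

    # G[V,E]: vertices are keywords, edges are keyword-association pairs.
    fingerprints.insert_one({
        "doc_id": "contract-001",
        "vertices": ["consent", "written", "assignment"],
        "edges": [["written", "consent"], ["consent", "assignment"]],
    })

    # Any document whose graph touches "consent" is a candidate for review.
    for doc in fingerprints.find({"vertices": "consent"}):
        print(doc["doc_id"])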

[NOTE: We are experimenting with using ontologies (such as OWL and RDF) as a way to enrich ai-Fingerprints with more intelligence. We are eager to find customers who want to build prototypes using this approach.]

The Secret Sauce

The magic is that ai-one’s API automatically detects keywords and associations – so it learns faster, with fewer documents, and provides a more precise solution than mainstream machine learning methods that use latent semantic analysis. Moreover, ai-one’s approach makes it relatively easy for almost any developer to build intelligent agents.

How to Build Intelligent Agents?

To build an intelligent agent, we first had to consider how a human reads and understands a document.

The Human Perspective

Humans are very good at detecting ideas – regardless of the words used to express them. As mentioned above, lawyers can express dozens of completely different legal concepts with a vocabulary of just a few hundred words. Humans can recognize the subtle differences between two paragraphs in how a lawyer uses words – both in meaning (semantics) and structure (syntax). Part of the cleverness of a lawyer is finding ways to combine as few words as possible to express a very precise idea that accomplishes a specific legal or business objective. In legal documents, each new idea is almost always expressed in a paragraph. So two paragraphs might have the exact same words but express completely different ideas.

To find these ideas, a person (or computer) must detect the patterns of word use – similar to finding a pattern in a signal. For example, as a child I knew I was in trouble when my mother called me by my first and last name – the combination of these words created a “signal” that was different from when she used just my first name. Similarly, a legal concept has a different meaning if two words occur together, such as “written consent,” than if the text uses only the word “consent.”
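A toy example makes the “signal” idea concrete. Treating adjacent word pairs as the simplest possible co-occurrence signal already separates two clauses that share the word “consent” (this illustrates the principle only – it is not how ai-one’s API works internally):

    from collections import Counter

    def bigrams(text):
        # Adjacent word pairs: the simplest possible co-occurrence "signal".
        words = text.lower().split()
        return Counter(zip(words, words[1:]))

    a = bigrams("tenant may not sublet without written consent of the landlord")
    b = bigrams("tenant may sublet with consent of the landlord")

    print(("written", "consent") in a)  # True  -- the word pair carries the signal
    print(("written", "consent") in b)  # False -- "consent" alone is not enough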

The (Conventional) Machine Learning Perspective

It’s almost impossible to explicitly program a computer to find such “faint signals” within a large number of documents. To do so, the computer would have to be programmed with every possible combination of words for a given idea, then search for and match them.

Machine learning technologies enable computers to identify features within the data to detect patterns. The computer “learns” by recognizing the combinations of features as patterns.

[There are many forms of machine learning – so I will keep focused only on those related to our text analytics problem.]

Natural Language Processing

One of the most important forms of machine learning for text analytics is natural language processing (NLP). NLP tools are very good at codifying the rules of language for computers to detect linguistic features – such as parts of speech, named entities, etc.
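For a flavor of what NLP toolkits provide out of the box, here is a short sketch using Python’s NLTK as a stand-in (our own prototype used OpenNLP in Java):

    import nltk

    # One-time downloads of the tokenizer and tagger models.
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    tokens = nltk.word_tokenize("ai-one demonstrated BrainDocs in San Diego.")
    print(nltk.pos_tag(tokens))
    # Each token comes back with a part-of-speech tag, e.g. ('demonstrated', 'VBD').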

However (at the time of this writing), most NLP systems can’t detect patterns unless they are explicitly programmed or trained to do so. Linguistic patterns are very domain-specific: the language used in medicine is different from the language used in law, and so on. Thus, NLP is not easily generalized. NLP only works in specific situations where there is predictable syntax, semantics and context. IBM Watson can play Jeopardy! but has had tremendous problems finding commercial applications in marketing or medical records processing. Very few organizations have the budget or expertise to train NLP systems. They are left to either buy an off-the-shelf solution (such as StoredIQ) or hire a team of PhDs to modify one of the open-source NLP tools. Good luck.

Latent Analysis Techniques

Tools such as latent semantic analysis (LSA), latent semantic indexing (LSI) and latent Dirichlet allocation (LDA) are all capable of detecting patterns within language. However, they require tremendous expertise to implement and often need large numbers of training documents. LSA and LSI are computationally expensive because they must recalculate the relationships between features each time they are given something new to learn. Thus, learning the meaning of the 1,001st document requires a calculation across the 1,000 previously learned documents. LSA uses a statistical approach called singular value decomposition to isolate keywords. Unlike LSA, ai-one’s technology also detects the association words that give a keyword context.

Similar to our ai-Fingerprint approach, LDA uses a graphical model for topic discovery. However, it takes tremendous skill to develop applications using LDA, and even when implemented, it requires the user to make informed guesses about the nature of the text. Unlike LDA, ai-one’s technology can be learned in a few hours. It requires no supervision or human interaction; it simply detects the inherent semantic value of text – regardless of language.
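To give a sense of what implementing these techniques involves, here is a minimal scikit-learn sketch (our illustration, not the customer’s system). Note the up-front guesses – the number of components or topics must be chosen before anything is learned – and that adding a new document means re-fitting over the whole corpus:

    from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
    from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation

    docs = [
        "written consent of the landlord is required to sublet",
        "tenant may not assign this lease without prior written consent",
        "landlord shall maintain the premises in good repair",
        "rent is due on the first day of each month",
    ]  # real deployments typically need hundreds of training documents

    # LSA/LSI: a TF-IDF matrix factored by truncated singular value decomposition.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    lsa = TruncatedSVD(n_components=2).fit(tfidf)  # n_components is a guess

    # LDA: raw counts, plus an up-front guess at the number of topics.
    counts = CountVectorizer(stop_words="english").fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)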

Our First Intelligent Agent Prototype: ai-BrainDocs

It took our team about a month to build the initial version of ai-BrainDocs. Our team used ai-one’s keyword and association commands to generate a graph for each document. This graph goes into MongoDB as a JSON object that represents the knowledge (content) of each document.

Next we created an easy way to build intelligent agents: we simply provide the API with examples of the concepts we want to find. This training set can be very short. For one type of legal contract, it took only 4 examples of text for the intelligent agent to achieve 90% accuracy in finding similar concepts.

Unlike solutions that use LSI, LDA and other technologies, the intelligent agents in ai-BrainDocs find ideas at the paragraph level. This is a huge advantage when looking at large documents – such as medical research or SEC filings.

Next we built an interface that allows the end-user to control the intelligent agents by setting thresholds for sensitivity and determining how many paragraphs to scan at a time.
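Conceptually, here is a toy sketch of how such an agent might work (our illustration – the product’s logic is more sophisticated). The agent keeps the fingerprints of its few training examples and flags any paragraph whose fingerprint overlaps them above the user’s sensitivity threshold:

    def jaccard(a, b):
        # Overlap between two edge sets: 0 = disjoint, 1 = identical.
        return len(a & b) / len(a | b) if a | b else 0.0

    class IntelligentAgent:
        def __init__(self, example_fingerprints, sensitivity=0.5):
            self.examples = example_fingerprints  # e.g. the 4 taught paragraphs
            self.sensitivity = sensitivity        # tunable by the end user

        def matches(self, paragraph_fingerprint):
            best = max(jaccard(paragraph_fingerprint, ex) for ex in self.examples)
            return best >= self.sensitivity

    # Fingerprints here are sets of (keyword, association) edges.
    agent = IntelligentAgent([{("written", "consent"), ("consent", "landlord")}],
                             sensitivity=0.3)
    print(agent.matches({("written", "consent"), ("notice", "landlord")}))  # True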

Our first customers are now testing ai-BrainDocs – and so far they love it. We expect to learn a lot as more people use the tool for different purposes. We look forward to developing ways for intelligent agents to interact – just like people – by comparing what they find within documents. We are finding that it is best for each agent to specialize in a specific subject, so giving agents ways to compare their results using Boolean operators lets them find similarities and differences between documents.

One thing is clear: Intelligent agents are ideal for mining unstructured text to find small ideas hidden in big data.

We look forward to reporting more on our work with ai-BrainDocs soon.

Posted by: Olin Hyde