Archive for the ‘Intelligent Agents’ Category

ai-one’s Biologically Inspired Neural Network

Sunday, February 1st, 2015

ai-one’s Learning Algorithm: Biologically Inspired Neural Network
– Introduction to HSDS vs ANN in Text Applications

Unlike traditional neural nets, the neural network at the core of ai-one, the HoloSemantic Data Space neural network (invented by Manfred Hoffleisch), or “HSDS” for short, is a massively connected, asymmetrical graph that is stimulated by binary spikes. An HSDS has no neural structures pre-defined by the user. Its building blocks resemble biological neural networks: a neuron has dendrites, on which the synapses from other neurons are placed, and an axon which ends in synapses at other neurons.

The connections between the neurons emerge in an unsupervised manner as the learning input is translated into the neural graph structure. The resulting graph can be queried by means of specific stimulations of neurons. In traditional neural systems it is necessary to set up the appropriate network structure at the outset according to what is to be learned. Moreover, the supervised learning employed by neural nets such as the perceptron requires a teacher who answers specific questions. Even neural nets that employ unsupervised learning (like those of Hopfield and Kohonen) require a neighborhood function adapted to the learning problem. In contrast, HSDS require neither a teacher nor a predefined structure or neighborhood function (note that although a teacher is not required, in most applications programmatic teaching is used to ensure the HSDS has learned the content needed to meet performance requirements). In the following we characterize HSDS according to their most prominent features.
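To make the contrast concrete, here is a minimal, illustrative sketch of an associative network that grows from its input with no predefined topology, teacher, or neighborhood function. It is only a toy analogue of the ideas above; it is not the HSDS algorithm itself, and the class and method names are ours.

```python
# Illustrative toy only (not the HSDS algorithm): an asymmetric association
# graph that grows from input text with no topology defined up front.
from collections import defaultdict

class ToyAssociationNet:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # co-occurrence counts
        self.totals = defaultdict(int)                       # how often each term was seen

    def learn(self, segment):
        """Unsupervised: each new segment adds 'neurons' (terms) and connections."""
        terms = segment.lower().split()
        for a in terms:
            self.totals[a] += 1
            for b in terms:
                if a != b:
                    self.counts[a][b] += 1

    def stimulate(self, term, top_n=5):
        """Query by 'stimulating' one term and reading off its strongest links.

        Weights are normalized by the stimulated term's frequency, so the
        link strength from a to b generally differs from b to a (asymmetry).
        """
        links = self.counts.get(term, {})
        scored = {b: c / self.totals[term] for b, c in links.items()}
        return sorted(scored.items(), key=lambda kv: -kv[1])[:top_n]

net = ToyAssociationNet()
for seg in ["the electric car charges at home",
            "the car drives to the charging station"]:
    net.learn(seg)
print(net.stimulate("car"))
```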

Exploitation of context

In ai-one applications like BrainDocs, HSDS is used to learn associative networks and extract features. The learning input consists of documents from the application domain, which are broken down into segments rather than entered whole: sentences may be submitted as is or segmented into sub-sentences according to grammatical markers. Through experimentation we have found that a segment should ideally consist of 7 to 8 words, which is in line with findings from cognitive psychology. Breaking text documents down into sub-sentences is the closest practical approximation to this ideal segment size. The contexts given by the sub-sentence segments help the system learn. The transitivity of term co-occurrences across the various input contexts (i.e. segments) is a crucial contribution to creating appropriate associations. This can be compared with the higher-order co-occurrences explored in the context of latent semantic indexing.
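As a rough illustration of this kind of segmentation (the exact grammatical markers used in BrainDocs are not described here), the sketch below splits text at clause boundaries and caps each segment near the 7-to-8-word target:

```python
# Rough approximation of sub-sentence segmentation: split on punctuation
# markers, then chop long clauses near the 7-8 word target. The grammatical
# markers actually used by the application are not specified here.
import re

def segment(text, max_words=8):
    clauses = re.split(r"[.;:,]\s*", text)   # crude clause boundaries
    segments = []
    for clause in clauses:
        words = clause.split()
        for i in range(0, len(words), max_words):
            chunk = words[i:i + max_words]
            if chunk:
                segments.append(" ".join(chunk))
    return segments

print(segment("The contexts given by the sub-sentence segments help the system "
              "learn, and the transitivity of co-occurrences creates associations."))
```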

Continuously evolving structure
The neural structure of a HSDS is dynamic and changes constantly in line with neural operations. In the neural context, change means that new neurons are produced or destroyed and connections reinforced or inhibited. Connections that are not used in the processing of input into the net for some time will get gradually weaker. This effect can also be applied to querying, which then results in the weakening of connections that are rarely traversed for answering a query.
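A simple way to picture this weakening is a multiplicative decay applied to connections that go unused, with very weak connections pruned away. The update rule below is an assumption for illustration only, not the actual HSDS dynamics:

```python
# Toy illustration: connections that are not reinforced gradually weaken and
# are pruned below a threshold. The multiplicative decay is an assumption for
# illustration, not the actual HSDS update rule.
def decay_step(weights, used_edges, decay=0.95, prune_below=0.05):
    """weights: {(a, b): strength}; used_edges: edges traversed this step."""
    updated = {}
    for edge, w in weights.items():
        w = w if edge in used_edges else w * decay   # unused links weaken
        if w >= prune_below:                          # very weak links vanish
            updated[edge] = w
    return updated

weights = {("car", "electric"): 0.9, ("car", "yugo"): 0.052}
print(decay_step(weights, used_edges={("car", "electric")}))
```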

Asymmetric connections
The connections between the neurons need not be equally strong in both directions, nor is it necessary that a connection exist between every pair of neurons (cf. Hopfield’s correlation matrix).

Spiking neurons
The HSDS is stimulated by spikes, i.e. binary signals which either fire or do not. Thresholds do not play a role in HSDS. The stimulus directed at a neuron is coded by the sequence of spikes that arrive at the dendrite.

Massive connectivity
Whenever a new input document is processed, new (groups of) neurons are created which in turn stimulate the network by sending out a spike. Some of the neurons reached by the stimulus react and develop new connections, whereas others, which are less strongly connected, do not. The latter nevertheless contribute to the overall connectivity because they make it possible to reach neurons which could not otherwise be reached. Given the high degree of connectivity, a spike can pass through a neuron several times since it can be reached via several paths. The frequency and the chronological sequence in which this happens determine the information that is read from the net.

General purpose
There is no need to define a topology before starting the learning process because the neural structure of the HSDS develops on its own. This is why it is possible to retrieve a wide range of information by means of different stimulation patterns. For example, direct associations or association chains between words can be found, the words most strongly associated with a particular word can be identified, etc.
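For example, finding an association chain between two words can be pictured as a path search over the learned graph. The sketch below assumes a toy dictionary-of-dictionaries structure rather than the real HSDS stimulation mechanics:

```python
# Illustrative only: an association chain between two terms as a shortest
# path over a learned association graph (a toy structure, not the actual
# HSDS stimulation mechanics).
from collections import deque

def association_chain(graph, start, goal):
    """graph: {term: {associated_term: weight}}; returns a chain of terms or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], {}):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

graph = {"car": {"electric": 0.8, "road": 0.5},
         "electric": {"battery": 0.7},
         "battery": {"charging": 0.6}}
print(association_chain(graph, "car", "charging"))  # ['car', 'electric', 'battery', 'charging']
```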

Rumsfeld Conundrum- Finding the Unknown Unknown

Tuesday, January 27th, 2015

Since we began the process of building applications using our AI engine, we have been focused on working with ideas or concepts. With BrainDocs we built intelligent agents to find and score similarity for ideas in paragraphs, but we still fell short of the vision we have for our solution. Missing was an intuitive, visual UI to explore content interactively using multiple concepts and metadata (like dates, locations, etc.). We want to give our users the power to create a rich and personal context to power through their research. What do I call this?

Some Google research led me to a great visualization and blog by David McCandless on the Taxonomy of Ideas. While the words in his viz are attributes of ideas, not the ideas themselves, it got me thinking in different ways about the problem.

Taxonomy of Ideas

If you substitute an idea (product or problem) in David’s matrix and add the dimension of time, you create a useful framework. If the idea above was “car”, then the top right might be Tesla and bottom left a Yugo (remember those?). Narrow the definition to “electric car” or generalize to “eco-friendly personal transportation” and the matrix changes. But insert an unsolved problem and now you have trouble applying the attributes. You also arrive at an innovator’s dilemma (not the seminal book by Clayton Christensen), the challenge of researching something that hasn’t been labeled and categorized yet.

Ideas begin in someone’s head. With research, debate, and engineering, they become products. Products have labels and categories that facilitate communication, search and commerce. The challenge for idea search on future problems is that the opposite occurs: products are not yet ideas and the problems they solve may not have been defined yet. If I may, Donald Rumsfeld nailed the problem with this famous quote:

“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

And if it’s an unknown unknown, it certainly hasn’t been labeled yet, so how do you search for it? Our CEO Walt Diggelmann used to say it this way: “ai-one gives you an answer to a question you did not know you had to ask!”

Innovators work in this whitespace.

If you could build and combine different intelligent (idea) agents for problems as easily as you test different combinations of words in a search box, you could drive an interactive and spontaneous exploration of ideas. In some ways this is the gift of our intelligence. New ideas and innovation are in great part combinatorial, collaborative and stimulated by bringing together seemingly unrelated knowledge to find new solutions.

Instead of pumping everything into your brain (or an AI) and hoping the ideas pop out, we want to give you the ability to mix combinations of brains, add goals and constraints and see what you can create. Matt Ridley termed this “ideas having sex”. This is our goal for Topic-Mapper (not the sex part).

So what better place to apply this approach than to the exploration of space? NASA already created a “taxonomy of ideas” for the missions of the next few decades. In my next blog I’ll describe the demo we’re working on for the grandest of the grand challenges, human space exploration.


AI, AGI, ASI, Deep Learning, Intelligent Machines.. Should you worry?

Saturday, January 17th, 2015

If the real life Tony Stark and technology golden boy, Elon Musk, is worried that AI is an existential threat to humanity, are we doomed? Can mere mortals do anything about this when the issue is cloaked in dozens of buzzwords and the primary voices on the subject are evangelists with 180 IQs from Singularity University? Fortunately, you can get smart and challenge them without a degree in AI from MIT.

There are good books on the subject. I like James Barrat’s Our Final Invention; while alarmist, it is thorough and provides a guide to a number of resources from both sides of the argument. One of those resources was the Machine Intelligence Research Institute (MIRI), founded by Eliezer Yudkowsky. The following book was recommended on the MIRI website and is a good primer on the subject.

Smarter Than Us – The Rise of Machine Intelligence by Stuart Armstrong can also be downloaded at iTunes.

“It will sharpen your focus to see AI from a different view. The book does not provide a manual for Friendly AI, but it shows the problems and points to the 3 critical things needed. We are evaluating the best way for ai-one to participate in the years ahead.” Walt Diggelmann, CEO ai-one.

In Chapter 11 Armstrong recommends we take an active role in the future development and deployment of AI, AGI and ASI. The developments are coming; the challenge is to make sure AI plays a positive role for everyone. A short summary:

“That’s Where You Come In . . .

There are three things needed—three little things that will make an AI future bright and full of meaning and joy, rather than dark, dismal, and empty. They are research, funds, and awareness.

Research is the most obvious.
A tremendous amount of good research has been accomplished by a very small number of people over the course of the last few years—but so much more remains to be done. And every step we take toward safe AI highlights just how long the road will be and how much more we need to know, to analyze, to test, and to implement.

Moreover, it’s a race. Plans for safe AI must be developed before the first dangerous AI is created.
The software industry is worth many billions of dollars, and much effort (and government/defense money) is being devoted to new AI technologies. Plans to slow down this rate of development seem unrealistic. So we have to race toward the distant destination of safe AI and get there fast, outrunning the progress of the computer industry.

Funds are the magical ingredient that will make all of this needed research—in applied philosophy, ethics, AI itself, and implementing all these results—a reality. Consider donating to the Machine Intelligence Research Institute (MIRI), the Future of Humanity Institute (FHI), or the Center for the Study of Existential Risk (CSER). These organizations are focused on the right research problems. Additional researchers are ready for hire. Projects are sitting on the drawing board. All they lack is the necessary funding. How long can we afford to postpone these research efforts before time runs out?”

About Stuart: “After a misspent youth doing mathematical and medical research, Stuart Armstrong was blown away by the idea that people would actually pay him to work on the most important problems facing humanity. He hasn’t looked back since, and has been focusing mainly on existential risk, anthropic probability, AI, decision theory, moral uncertainty, and long-term space exploration. He also walks the dog a lot, and was recently involved in the coproduction of the strange intelligent agent that is a human baby.”

Since ai-one is a part of this industry and one of the many companies moving the field forward, there will be many more posts on the different issues confronting AI. We will try to keep you updated and hope you’ll join the conversation on Google+, Facebook, Twitter or LinkedIn. AI is already pervasive and developments toward AGI can be a force for tremendous good. Do we think you should worry? Yes, we think it’s better to lose some sleep now so we don’t lose more than that later.



Personal AI Helps Convert Social CRM for Recruiting

Thursday, June 26th, 2014

Given the need for more effective content marketing and better quality lead generation, why aren’t the tools better?  Certainly there are lots of applications, SaaS products and services available for all parts of the marketing and sales process.   With BrainBrowser we provide a tool that can understand the content from marketing and match it to bloggers, LinkedIn connections, Twitter followers and find candidates in places you would never look.

Since about one-third of the 7,500+ queries by our testers were using BrainBrowser to search for people, a key objective is to add features to manage the results and integrate them into your workflow.  If you find someone relevant to your work or a potential recruit, you should be able to connect with them right from the list, follow them on Twitter or share lists of candidates with collaborators.

BrainBrowser with Nimble Popup

As a recruiting professional your task is to find the candidates and conversations on the web where conversions will be maximized and get there first.  BrainBrowser does this for you, creating a list of people, companies and sites that match the content of your position and company description.

As a sales professional, you want to use content, either from your marketing department or content you find and create on your own, to engage your network and to identify the people that are talking about and responsible for buying/influencing a purchase.

In our research (using BrainBrowser) we discovered Nimble and a new category of Social CRM vendors with applications driving social selling (check out Gerry Moran’s post for background on content and social selling).  We were immediately hooked and started using Nimble as our company CRM but quickly found it worked well for managing lists of candidates.

Nimble, a new social CRM application, has made integration easy and I’m recommending it to everyone.  All you need to do is sign up for the trial (it’s only $15 per month if you like it) and install the plug-in in your Chrome browser.  You’ll then be able to highlight the name of a person on the list in BrainBrowser, right click, select Nimble Search, and a popup will display the person’s social media pages on LinkedIn, Twitter, Google+, etc.  Click Save and you’ve added them to your Nimble Contacts, where you can view their social media messages and profile and decide whether to connect or follow.  Tag them and you’ve created a recruiting hot list you can track in Nimble.

Here’s a video clip I tweeted to CEO Jon Ferrara demonstrating how/why we love it.  This was in response to his video clip to Larry Nipon following up on my referral.

Let me know how you like it.  They do a great job, but reach out if you have any questions on the difference between CRM and Social CRM, or on how we’re using it for recruiting.  Be sure to add @ai_one or @tom_semantic if you tweet about this, and sign up to request a login for BrainBrowser.

As of today, there are only 22 slots left for FREE registrations under the Alpha test program.  Participation gets you a year free on the platform.  Email or tweet @tom_semantic to sign up.

Context, Graphs and the Future of Computing

Friday, June 20th, 2014

Robert Scoble and Shel Israel’s latest book, Age of Context, is a survey of the contributions across the globe to the forces influencing technology and our lives today.  The five forces are mobile, social media, data, sensors and location.  Scoble calls these the five forces of context and harnessed, they are the future of computing.

Pete Mortensen also addressed context in his brilliant May 2013 article in Fast Company “The Future of Technology Isn’t Mobile, It’s Contextual.”   So why is context so important (and difficult)?  First, context is fundamental to our ability to understand the text we’re reading and the world we live in.  In semantics, there is the meaning of the words in the sentence, the context of the page, chapter, book and prior works or conversations, but also the context the reader’s education and experience add to the understanding.  As a computing problem, this is the domain of text analytics.

Second, if you broaden the discussion as Mortensen does to personal intelligent agents (Siri, Google Now), the bigger challenge is complexity.  The inability to understand context has always made it difficult for computers and people to work together.  People, and the language we use to describe our world, are complex, not mathematical; they can’t be reduced to a formula or rule set, no matter how much data is crunched. Mortensen argues (and we agree) that the five forces are finally giving computers the foundational information needed to understand “your context,” and that context is expressed in four data graphs.  These data graphs are:

  • Social (friends, family and colleagues),
  • Interest (likes & purchases),
  • Behavior (what you do & where) and
  • Personal (beliefs & values).

While Google Glass might be the poster child of a contextual UX, ai-one has the technology to power these experiences by extracting Mortensen’s graphs from the volumes of complex data generated by each of us through our use of digital devices and interaction with increasing numbers of sensors known as the Internet of Things (IoT).  The Nathan API is already being used to process and store unstructured text and deliver a representation of that knowledge in the form of a graph.  This approach is being used today in our BrainDocs product for eDiscovery and compliance.

In Age of Context, ai-one is pleased to be recognized as a new technology addressing the demands of these new types of data.  The data and the applications that use them are no longer stored in silos where only domain experts can access them.  With Nathan the data space learns from the content, delivering a more relevant contextual response to applications in real time, with user interfaces that are multi-sensory, human and intuitive.

We provide developers this new capability in a RESTful API. In addition to extracting graphs from user data, they can build biologically inspired intelligent agents they can train and embed in intelligent architectures.   Our new Nathan is enriched with NLP in a new Python middleware that allows us to reach more OEM developers.  Running in the cloud and integrated with big data sources and ecosystems of existing APIs and applications, developers can quickly create and test new applications or add intelligence to old ones.
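As a sketch of what a call against a RESTful service of this kind might look like: the base URL, route, payload fields, and response shape below are placeholders we chose for illustration, not the documented Nathan interface.

```python
# Hypothetical sketch of calling a REST endpoint of this kind. The URL, route,
# payload fields, and response shape are placeholders, not the documented
# Nathan API.
import requests

API_BASE = "https://nathan.example.com/v1"   # placeholder base URL
API_KEY = "YOUR_API_KEY"                     # placeholder credential

def learn_document(text):
    """Send unstructured text and get back an association graph (assumed shape)."""
    resp = requests.post(
        f"{API_BASE}/learn",                 # hypothetical route
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"document": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()                       # e.g. {"keywords": [...], "associations": {...}}

graph = learn_document("Context is fundamental to understanding the text we read.")
print(graph)
```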

For end users, the Analyst Toolbox (BrainBrowser and BrainDocs) demonstrates the value proposition of our new form of artificial intelligence and shows developers how Nathan can be used with other technologies to solve language problems.  While we will continue to roll out new features to this SaaS offering for researchers, marketers, government and compliance professionals, the APIs driving the applications will be available to developers.

Mortensen closes, “Within a decade, contextual computing will be the dominant paradigm in technology.”  But how?  That’s where ai-one delivers.  In coming posts we will discuss some of the intelligent architectures built with the Nathan API.

ai-one named Finalist in SDBJ Innovation Awards for 2013

Thursday, June 27th, 2013

At the San Diego Business Journal Annual Innovation Award event, ai-one was named a finalist in the technology category. The award was presented at the prestigious event on June 18th at Scripps, attended by several hundred leaders in San Diego’s tech, medical, software and telecom industries. ai-one received the award for its leading edge technology in machine learning and content analytics, as evidenced by the release this year of the new Nathan API for deep learning applications.

The award was accepted by ai-one COO Tom Marsh and partner for defense and intelligence, Steve Dufour, CEO of ISC Consulting of Arizona.

Tom Marsh & Steve Dufour at SDBJ Innovation Awards


ai-one’s ‘Artificial Brain’ Has a Real Eye for Data (SDBJ)

TECH: Software Can Dig Through and Decipher Information

Software writer ai-one Inc. doesn’t just promise code. The company promises to pull new perspectives and second opinions from seemingly inscrutable data.

SDDT recognizes ai-one’s presentation at CommNexus to SK Telecom of South Korea

Thursday, June 27th, 2013

ai-one was recognized for its participation in the CommNexus MarketLink event June 4th in San Diego California. The event featured companies from all across the US selected by SK Telecom for their potential to add value to SK Telecom’s network. The meeting was also attended by SK’s venture group based in Silicon Valley.
Tierney Plumb of the San Diego Daily Transcript reported, “San Diego-based ai-one inc. pitched its offerings Tuesday to the mobile operator. The company, which has discovered a form of biologically inspired neural computing that processes language and learns the way the brain does, was looking for two investments — each about $3 million — from SK. One is a next-generation Deep Personalization Project whose goal is to create an intimate personal agent while providing the user with total privacy control. ”
For the full text of this article, see the San Diego Source (Technology section): “Startups line up to meet with SK Telecom.”

Collaboration, Artificial Intelligence and Creativity

Thursday, April 4th, 2013

We are thrilled to publish this guest blog by Dan Faggella – a writer with a focus on the future of consciousness and technology. ai-one met Dan online through his interest in the beneficial development of human and volitional (sentient) potential.  Dan is a national martial arts champion in Brazilian Jiu Jitsu and a Masters graduate of the prestigious Positive Psychology program at the University of Pennsylvania. His eclectic writings and interviews with philosophers and technology experts can be found online.

Artificial Intelligence as a Source for Collaboration

At a recent copywriting event in Las Vegas, I heard a nationally renowned writer of sales letters and magazine ads mention something that resonated with me. He said that copywriters are generally isolated people who like to work at home on a laptop, not in a big room with other people or in a cubicle in an office – but that some of the absolute best ad agencies were getting their best results by “forcing” (in his words) their best copywriters to work together on important pitches and sales letters – delivering a better product than any of them could have alone.

Some people in the crowd seemed surprised, and the copywriter on stage mentioned that many “a-list” copywriters tend to think their creativity and effectiveness will be stifled by pandering to the needs of other writers or by arguing over methods and approaches to writing. In my opinion, however, this notion of the “genius of one” is on the way out, even in fields where creativity rules.

If we take the example of sports, the need for feedback and collaboration is for some reason more obvious. A professional football team does not have one genius coach; it has offensive, defensive, and head coaches with teams of assistant coaches. In addition, top athletes from basketball to wrestling to soccer are usually eager to play with and against a variety of teammates and opponents in order to broaden their skills and test their game in new ways. The textbooks on the development of expertise are full of examples from the world of sport, especially pertaining to feedback, coaching, and breaking from insularity.

The focus of my graduate studies at UPENN was in the domain of skill development, where the terms “feedback” (perspective and advice from experts outside oneself) and “insularity” (a limited scope of perspective based on an inability or unwillingness to seek out or take in the perspective of other experts) are common. In sport, insularity is clearly seen as negative. However, in literature or philosophy, it seems that the “genius of one” still seems to reign.

Why might this be the case, when in so many other fields (chess, sports, business, etc.) we see collaboration proliferate? I believe that the answer to this question lies partially in the individual nature of these fields, but that new approaches to collaboration – and particularly new applications of artificial intelligence – will eventually break down the insularity in these and many other “creative” fields.

What is Creativity & Collaboration All About, Anyway?

Creativity, in short, is the ability to create, or to bend rules and convention in order to achieve an end. Collaboration is working jointly on a project. Both, in my mind, imply the application of more intelligence to a particular problem.

Just as three top copywriters can put together a better sales letter (generally) than one copywriter, three top chess players are more likely to defeat a computer chess program (generally) than one top chess player alone.

Technology allows us to bring more to bear when it comes to applying intelligence. Even in the relatively simple task of putting together this article, I am able to delete, reorganize, link, and research thanks to my laptop and the internet. I bring more than my brain and a pen on paper could do alone. I may not be “collaborating,” but I am applying the information and research of others to my own work in real time.

Artificial intelligence adds an entirely new level of “applied intelligence” to projects that may extend beyond what internet research and human collaboration could ever achieve. For our purposes today, the progression from “less” to “more” applied intelligence will be: working alone, working with others, working with others and researching online, and applying artificial intelligence. We already have tremendous evidence of this today in a vast number of fields.

Applications Already Underway

I will argue that, in general, collaboration and the application of artificial intelligence will be prevalent in a field based primarily on: the competitiveness of that field (in sports and business, for instance, competition is constant, and so testing and evaluating can be constant), the popularity and perceived importance of the field (trivial matters rarely hold the attention of groups of smart people, and are even less likely to garner grants or resources), and how lucrative that field is (as in finance).

In finance, for example, the highly competitive, highly lucrative, and high-speed work of number-crunching and pattern-recognition has been one of the most prominent domains of AI application. Not only are human decisions bolstered by amazingly complex real-time data, but many “decisions” are no longer made by humans at all; they are completely or mostly automated based on streaming data and making sense of patterns. It is estimated that nearly 50% of all trades in American and European markets are made automatically – and that share is likely to increase.

Anyone who’s visited Amazon, Google, or Facebook knows that advertisements or promoted products are calibrated specifically to each user. This is not done by a team of guessing humans individually testing ads and success rates, but by intelligent, learning algorithms that use massive amounts of data from massive numbers of users (including data from outside their own sites) to present the advertisements or products most likely to generate sales.

The above seem like obvious first applications of the expensive technologies of AI because of the amount of money involved and the necessity for businesses to stay ahead in a competitive marketplace (generating maximum revenue, giving customers offers that they want, etc.). Implications have already been seen in sports, with companies like Automated Insights providing intelligent sports data and statistics in regular human language in real time. My guess is that in the big-money world of professional sport, even this kind of advanced reporting is only the very tip of the iceberg.

However, the implications will soon also reverberate into the worlds of more “complex” systems of meaning, as well as fields where the economic ramifications are less certain. I believe that the humanities (poetry, literature, philosophy) will see a massive surge of applied intelligence that will not only break the mold of the “genius of one,” but will also open doors to all of the future possibilities of AIs contributing to “creative” endeavors.

Future Implications of AI in “Creative” Fields / The Humanities

It seems perfectly reasonable that more applications for AI have been found in the domain of finance than in the domain of philosophy or literature. Finance involves numbers and patterns, while literature involves more complex and arbitrary ideas of “meaning” and a system of much more complicated symbols.

However, I must say that I am altogether surprised that there seems to be very little application of AI to the domain of the humanities. In part, I believe this to be a problem of applying AI to complex matters of “meaning” and subjective standards of writing quality (there is no clear “bottom line” as there is in finance), but the notion of the “genius of one” invariably plays a part in this trend as well, as even collaboration among humans (never mind collaboration with an AI) is often comparatively limited in these fields.

Not being a novelist, I can hardly say that if writers collaborated with other expert writers more often, they would create “better” overall works. I have an inkling, however, that this might be the case.

In the world of psychology, I believe that, setting aside the desire to “hog the glory,” expert researchers would almost certainly take up the opportunity to collaborate on their most important projects with other expert researchers in the field. In a world of flowing data streams, AI and statistical models also seem more readily applicable.

In philosophy – where works are generally still seen to be completed by lone, pensive thinkers in dark, pensive rooms – I believe that collaboration and AI will eventually transcend the “genius of one,” and rid us of the notion that the best work is done by solo minds.

If one philosopher spent 12 months aiming to compare and find connections between the ethics of Aristotle and Epictetus, I would argue that 12 very smart philosophers working together for 12 months might achieve much more insight.

Similarly, if intelligent algorithms could be created that could detect commonalities in terms, symbols, and meanings – entirely new connections and insights might be made possible, and much vaster reams of philosophical text could be analyzed in a much more uniform fashion – producing an objective perspective completely unattainable to human beings without an AI aide. I believe that this is already possible, though its applications in philosophy and the humanities in general seem almost nonexistent outside of a few events and experiments.
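As a very rough, present-day stand-in for “detecting commonalities in terms,” one could compare two passages by lexical similarity, for example TF-IDF cosine similarity; real semantic analysis of philosophical texts would of course require far more than word overlap. A minimal sketch, with made-up example sentences:

```python
# A crude, present-day stand-in for detecting shared terms between passages:
# TF-IDF cosine similarity. Real semantic comparison of philosophical texts
# would require far more than lexical overlap.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

aristotle = "Virtue is a disposition to choose the mean relative to us."
epictetus = "Some things are within our power to choose, and some are not."

vectors = TfidfVectorizer().fit_transform([aristotle, epictetus])
score = cosine_similarity(vectors[0], vectors[1])[0][0]
print(f"Lexical similarity: {score:.2f}")
```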

I believe very much in the power of the individual mind, and mean no disrespect to human capacity or to individual thinkers when I say that the era of the “genius of one” is going to progressively evaporate. In 1920, you might have been able to win the Nobel Prize in your 40s with a small team of researchers. In 2020, you’re more likely to win the Nobel Prize in your 60s with a global research team that has been hard at work for decades. Even the more “creative” domains of the humanities will experience a similar shift as collaboration becomes more common, research becomes simpler, and intelligence becomes more and more prevalent and nuanced.

Conclusion: Robot Shakespeare?

It is interesting to posit that at some point – potentially within this century – the best prose, the best novels, and the best philosophical insight will come not from individual geniuses, not even from teams of researchers, but almost entirely from AI.

This is not to say that I believe a “robot Shakespeare” will be in our midst anytime soon – but rather that we ought to keep our minds open to the idea of AI being something other than calculators and cars that drive themselves. The nuanced connections of meaning can already be used to supplement human efforts with insights in so many domains, and in a period of 20, 40, or 60 years, we may see all elements of human capacity (not just statistical number-crunching) enhanced a billion-fold by the AIs of the future.

The ethical, political, and other implications aside, let us keep our eyes open for the implications of applied intelligence across all fields of human endeavor. We may question technology’s ability to contribute, but remember that less than 70 years passed between the early flights of the Wright brothers and the landing on the moon. We might see a similar time frame between the advent of Amazon’s intelligent product offers and the replacement of humans at the helm of creative endeavors in writing, philosophy, poetry, and beyond. Only time will tell.

Thinking forward,

-Daniel Faggella