How it works
Basics
The velmai application architecture consists of a small front-end system, which can operate in multiple user-facing systems (website, mobile app, messaging platform, etc.), and a suite of servers, which process the user input and produce the AI response.
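As a rough sketch of the contract between those two halves, the exchange can be pictured as a simple request/response pair. The names and fields below are illustrative only, not velmai's actual message format.

```typescript
// Illustrative sketch: a front end (website, mobile app, messaging platform)
// sends the user's text to the velmai server suite and gets the bot's reply.
interface ChatRequest {
  botId: string;     // which velmai bot the customer has deployed (hypothetical field)
  sessionId: string; // ties messages from one user together (hypothetical field)
  text: string;      // what the user typed
}

interface ChatResponse {
  sessionId: string;
  reply: string;     // the AI response produced by the back end
}
```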
Front end
Our front-end software is very small (about 10 lines of HTML and JavaScript), which allows us to integrate into multiple environments with minimal impact.
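To give a sense of scale, the snippet a customer embeds could look something like the sketch below. The URL and widget API shown here are placeholders, assumed for illustration rather than taken from velmai's actual integration.

```typescript
// Hypothetical loader of roughly the size described: it injects the velmai
// chat widget into the host page and does nothing else.
const script = document.createElement("script");
script.src = "https://cdn.example-velmai.invalid/widget.js"; // placeholder URL
script.async = true;
script.onload = () => {
  // The widget attaches itself to a container the customer adds to their page.
  (window as any).VelmaiWidget?.mount("#velmai-chat");       // hypothetical API
};
document.head.appendChild(script);
```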
The front end connects directly to our suite of back-end servers, without needing to use the customer's own back-end systems, again keeping the integration impact minimal.
The back end processes the input (the user's chat with the bot) and returns the bot's response directly to the front-end software.
This direct connection is made possible by opening an XML pipe, allowing transparent operation with no need for a browser, app, or messaging-platform refresh.
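In a browser, this kind of refresh-free exchange is typically implemented with an XMLHttpRequest- or fetch-style call. The sketch below assumes that approach; the endpoint and payload shape are placeholders.

```typescript
// Sketch of a refresh-free exchange between the front end and the velmai
// back end over an XML-pipe-style channel. URL and fields are assumptions.
async function sendToVelmai(sessionId: string, text: string): Promise<string> {
  const response = await fetch("https://api.example-velmai.invalid/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionId, text }),
  });
  const data = await response.json();
  return data.reply; // rendered in place, with no page or app refresh
}
```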
There is an option to move some back-end functions from velmai servers to the customer's servers for performance reasons, but this is generally not required.
Back end
The back-end system is where the user input is processed, using multiple (currently 36) load-balanced servers. The processing consists of several layers of parsing, weighting and response building. These are combined with learning: acquiring new knowledge and strengthening knowledge that already exists.
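A conceptual sketch of that layered flow is shown below. The stage names and signatures are invented for illustration; they are not velmai's internal interfaces.

```typescript
// Conceptual sketch of the processing layers: parsing, weighting,
// response building, with learning running alongside.
interface Stage<In, Out> {
  run(input: In): Out;
}

interface ParsedInput { tokens: string[] }
interface WeightedInput { tokens: string[]; weights: number[] }

class Pipeline {
  constructor(
    private parser: Stage<string, ParsedInput>,
    private weigher: Stage<ParsedInput, WeightedInput>,
    private responder: Stage<WeightedInput, string>,
    private learner: { update(input: WeightedInput, reply: string): void },
  ) {}

  handle(userText: string): string {
    const parsed = this.parser.run(userText);
    const weighted = this.weigher.run(parsed);
    const reply = this.responder.run(weighted);
    // Learning happens on each exchange: new knowledge is added and
    // existing knowledge is reinforced.
    this.learner.update(weighted, reply);
    return reply;
  }
}
```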
The back-end systems can also interact with other velmai bots and operate in a symbiotic hive mode, sharing knowledge and processing, and learning from each other.
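One way to picture hive mode is a publish/subscribe exchange of learned facts between bots. The interface below is hypothetical, chosen only to make the idea concrete.

```typescript
// Hedged sketch of hive-mode knowledge sharing between velmai bots.
interface KnowledgeItem {
  topic: string;
  fact: string;
  strength: number; // reinforced when several bots report the same fact
}

interface Hive {
  publish(botId: string, item: KnowledgeItem): void;
  subscribe(botId: string, onItem: (item: KnowledgeItem) => void): void;
}
```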
In this environment, the velmai bots can also connect to, and interact with, external devices and knowledge systems, either as a source of knowledge or as a destination for output.
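Such an external connection could be modelled as a connector that the bot either queries or drives. Again, the interface is illustrative rather than velmai's actual integration API.

```typescript
// Sketch of an external connector: a knowledge source the bot can query,
// or an output target the bot can send commands to.
interface ExternalConnector {
  name: string;
  query?(question: string): Promise<string>; // knowledge source, e.g. a database
  send?(command: string): Promise<void>;     // output target, e.g. a device
}
```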
Logical architecture
The schematic below shows, at a high level, how the logical components of the system described above fit together.