Works & Experience
- BRAIN2 SS
- MFC Service
- BRAIN2 BLM
- BRAIN2 Text
- BRAIN2 Chat
- BRAIN2 BI
- Pushkin
Russian neural networks (neuromodels) and artificial intelligence projects by Cognitive Systems Company
Semantic Search System BRAIN2SemanticSearch
It is a robot assistant that "reads" a text of any length and subject matter in a short time and then answers questions about it in natural language. At the core of the service lies a Semantic Vector Model, which finds semantic links in the text and takes the context of each word into account. The system understands natural-language texts without preliminary markup by a person.
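The internals of the system are not published; as a rough illustration of the retrieval idea, the sketch below (Python) answers a question by comparing contextual sentence embeddings and returning the closest passage. The sentence-transformers model name and the toy corpus are assumptions, not components of the actual product.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in embedding model

corpus = [
    "The warranty period for the device is 24 months.",
    "Returns are accepted within 14 days of purchase.",
    "Technical support is available on weekdays from 9 to 18.",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

def answer(question: str) -> str:
    """Return the corpus sentence most semantically similar to the question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, corpus_emb)[0]    # cosine similarities
    return corpus[int(scores.argmax())]

print(answer("How long is the warranty?"))         # -> the 24-month sentence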
MFC Service
BRAIN2 MFC is software for Promobot robots "working" at multifunctional government service centers (MFCs). It allows visitors to request the service they need in natural language.
Key software features
A. Identification of off-topic queries and relevant answers to valid questions. For example, to the question "Where do I get a pirate's passport?" the system answers "I don't know", while the valid question "Where do I get a passport?" is answered with the relevant MFC service.
B. Paraphrase recognition. The system gives the correct answer to different phrasings of a question, for example "how to get a passport?" or "how to get an identity card?".
C. Assessment of the "confidence level" of the system's responses. If the system's confidence is below a specified threshold, it can either reply "no answer" or provide the most relevant option while noting that it is not sure (see the sketch below).
The model shows 95.6% correct classifications when tested on a mix of valid and invalid queries.
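The decision logic in item C can be illustrated with a minimal sketch, assuming a hypothetical confidence threshold and candidate ranking; the actual BRAIN2 MFC implementation is not public.

from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.6  # assumed value; the real threshold is configurable

def respond(candidates: List[Tuple[str, float]], strict: bool = True) -> str:
    """candidates: MFC services ranked by model confidence, best first."""
    best_service, confidence = candidates[0]
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"You need the service: {best_service}"
    if strict:
        return "No answer"                       # low confidence, refuse to guess
    return f"I am not sure, but this may help: {best_service}"

print(respond([("Passport issuance", 0.92)]))             # confident answer
print(respond([("Passport issuance", 0.31)]))             # -> "No answer"
print(respond([("Passport issuance", 0.31)], strict=False))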
BRAIN2 BLM — Big Linguistic Model
The Big Linguistic Model is a language processor capable of working with Russian and English texts.
The model detects errors in words and sentences, determines the morphological properties of words and constructs phrases according to linguistic norms. On the basis of BLM, intelligent dialogue and reference systems can be built and trained on natural text without pre-processing it or splitting it into question-answer pairs, which significantly simplifies the task.
At the moment, the link below allows you to test one of the BLM modules, the spelling checker BRAIN2 Spell.
Go to the demo page
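The BRAIN2 Spell model itself is not described publicly; as a baseline illustration of the task, the sketch below corrects a word by choosing the dictionary entry with the smallest Levenshtein edit distance. The tiny dictionary is an assumption.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

DICTIONARY = ["passport", "identity", "document", "service"]   # toy word list

def correct(word: str) -> str:
    """Return the dictionary word closest to the (possibly misspelled) input."""
    return min(DICTIONARY, key=lambda w: edit_distance(word.lower(), w))

print(correct("pasport"))   # -> "passport"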
Works Report №1 — BRAIN2 Spell
Works Report №2 — BRAIN2 Morph
BRAIN2 Text
Cases based on several multi-layer neuromodels (automated)
BRAIN2 Text (DSSM). Example of an intelligent assistance system.
Five multi-layer BRAIN2 Text models, trained on a fragment of the Sberbank strategy text consisting of 8,200 words and 800 sentences, perform a semantic search for query-relevant tokens with an accuracy of 0.86 and synthesize a sentence-level response from these lexemes with an accuracy of 0.91 (F-measure 0.9).
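For reference, the F-measure quoted above is the harmonic mean of precision and recall:
F1 = 2 · precision · recall / (precision + recall).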
Go to the demo page
BRAIN2 Chat
Cases based on multi-layer neuromodels (automated)
BRAIN2 Business Intelligence
Cases based on one multi-layer neuromodel
House Prices Kaggle Task:
The House Prices model, trained for 2 minutes 33 seconds on 79 parameters of 1,461 real estate objects, predicts the cost of an object with an average error < 0.42 (prediction time < 1 sec.).
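The BRAIN2 BI model itself is not public; as a rough point of comparison, a minimal scikit-learn baseline on the same Kaggle data might look like the sketch below. The file path, feature handling and metric are assumptions.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

train = pd.read_csv("train.csv")              # Kaggle House Prices data (assumed path)
y = train["SalePrice"]
X = pd.get_dummies(train.drop(columns=["Id", "SalePrice"]), dtype=float)  # one-hot encode
X = X.fillna(X.median())                      # crude missing-value fill

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_mean_absolute_percentage_error")
print("Mean absolute percentage error:", -scores.mean())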
Digit Recognition (MNIST) Kaggle Task:
The handwritten digit recognition model (Digit Recognizer), trained for 20 minutes on 784 parameters (28x28-pixel images) over 28,000 images, recognizes handwritten digits with an accuracy of 81% (classification time 1-2 sec.).
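Again for illustration only (not the BRAIN2 BI model), a small multi-layer perceptron trained on the Kaggle Digit Recognizer data with scikit-learn; the file path and hyperparameters are assumptions.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

data = pd.read_csv("train.csv")                  # Kaggle Digit Recognizer data (assumed path)
X = data.drop(columns=["label"]).values / 255.0  # 784 pixel features scaled to [0, 1]
y = data["label"].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=20, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))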
BRAIN2 Helpdesk (SSM). Intelligent search through the contents of files.
Three BRAIN2 Helpdesk models were trained on 100 documents containing 1,200 sentences. They allow instant semantic search for relevant documents with an accuracy of 88%.
Answers are also provided through a Telegram chat-bot that offers varied responses.
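The Helpdesk models are likewise not public; a minimal sketch of retrieving the most relevant document for a query, using TF-IDF vectors and cosine similarity, could look like this. The document contents are assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {                                    # toy stand-ins for the indexed files
    "vpn_setup.txt": "How to configure the corporate VPN client on Windows.",
    "printer_faq.txt": "Fixing common network printer connection problems.",
    "password_reset.txt": "Resetting a forgotten domain account password.",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents.values())

def find_document(query: str) -> str:
    """Return the name of the document most relevant to the query."""
    q_vec = vectorizer.transform([query])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    return list(documents)[int(scores.argmax())]

print(find_document("I forgot my password"))     # -> "password_reset.txt"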
Research project
Based on several multi-layer neuromodels
Pushkin Project
The aim of the project is to teach AI to compose poems in the style of Russia's most prominent poet (quatrains in iambic tetrameter). To do this, the company has developed models that determine and select rhymes and word stresses. In addition, work is underway on a model of semantic associations over groups of words and text combinations.
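The company's models are not public, but one of the sub-tasks mentioned above, checking whether a line fits iambic tetrameter once word stresses are known, can be sketched as follows. The tiny stress dictionary is a toy assumption.

VOWELS = "аеёиоуыэюя"

# word -> 0-based index of the stressed syllable; None = no obligatory stress
STRESS = {"мороз": 1, "и": None, "солнце": 0, "день": None, "чудесный": 1}

def syllables(word: str) -> int:
    return sum(ch in VOWELS for ch in word.lower())

def is_iambic_tetrameter(line: str) -> bool:
    """Iamb = unstressed + stressed, so stresses must fall on odd 0-based
    syllables; a tetrameter line has 8 syllables (9 with a feminine ending)."""
    position = 0
    for word in line.lower().split():
        stress = STRESS[word]
        if stress is not None and (position + stress) % 2 == 0:
            return False                  # stress landed on a weak syllable
        position += syllables(word)
    return position in (8, 9)

print(is_iambic_tetrameter("мороз и солнце день чудесный"))   # -> True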