Architecture of ConicIT


ConicCollect - Analyzing Mainframe Performance

The heart of the ConicIT system is ConicCollect, which runs and manages the required investigations. The investigations are defined using ConicAdmin, and ConicCollect is responsible for performing them every minute. Each investigation starts when ConicCollect connects to the data source, which is usually a mainframe monitor such as TMON, SysView, or Omegamon, but can also be an external source. Connecting to a mainframe monitor is done using the S3270 application, which allows ConicCollect to communicate with the monitors just as a human user would. Connecting to external sources is done using the External API.
ConicCollect sends specific requests to the mainframe monitors, parses the response screens into variables, and sends the information to the Processing and Alerts Engine.
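To illustrate the screen-to-variables step, here is a minimal sketch of parsing one fixed-width monitor screen line into a named metric. The 12-character name field, the metric name, and the struct are hypothetical illustrations, not an actual monitor layout or ConicIT code:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical fixed-width screen line: columns 0-11 hold the metric
 * name, the rest of the line holds its value, as a human would read it. */
typedef struct {
    char name[16];
    double value;
} metric_t;

/* Parse one screen line into a named metric. Returns 0 on success. */
int parse_screen_line(const char *line, metric_t *out)
{
    if (strlen(line) < 13)
        return -1;
    /* Copy the fixed-width name field and trim trailing blanks. */
    memcpy(out->name, line, 12);
    out->name[12] = '\0';
    for (int i = 11; i >= 0 && out->name[i] == ' '; i--)
        out->name[i] = '\0';
    /* The remainder of the line is the numeric value. */
    out->value = strtod(line + 12, NULL);
    return 0;
}
```

In practice each monitor screen has its own layout, so the real parsing is driven by the screen definitions configured in ConicAdmin rather than hard-coded offsets.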

S3270 - Getting the information from monitors

S3270 is a scripted terminal-emulation application that communicates with mainframe systems over IBM's TN3270 protocol. ConicCollect uses the S3270 application to bring information from the mainframe without the need for a special agent and without affecting the performance of the mainframe.

External API - Getting performance information from external systems

ConicIT provides this special API, which allows us and our customers to easily add connectivity to virtually any system, such as web-services, databases, open systems, or any other source. ConicCollect can get and process information from these sources via the External API just as it gets information from the mainframe via the S3270 application.

Profiles Calculator - Calculating statistics over the performance metrics

Every night the Profiles Calculator runs in the background to calculate and characterize the profile of the typical behavior of each variable. The Profiles Calculator uses our unique machine learning algorithms to learn and predict the normal behavior of the metrics at each day and hour. The profile is specific to each day and hour, so, for example, the same variable will get different profiles for Sunday 10:00am, Monday 10:00am, and Monday 6:00pm. The profiles are calculated separately for thousands of variables, and each profile is based on a few weeks of history. This characterization of the variables' typical behavior is later used by the Processing and Alerts Engine to detect anomalies in the performance of your system. This means that the same scenario may or may not trigger an alert, depending on the day, the hour, and the specific variable. For example, ConicIT can regard 5 minutes of 30% CPU for some task as normal or as an anomaly, depending on the behavior of that specific variable at that specific hour and day of the week.

Processing and Alerts Engine

This is the core analysis and algorithms engine of ConicIT. It’s responsible for:

  • Analyzing mainframe performance and the performance of external systems
  • Creating calculated-variables (e.g. ratios between different metrics, rates of resource consumption, and any other calculated-variables)
  • Applying any business logic as defined in ConicAdmin
  • Applying unique machine learning algorithms for anomaly detection, and evaluating the specific conditions for alerts
  • Sending alerts via any of the required channels: ConicWatch, alerts to SplunkTM, SNMP traps, an alerts file, TCP, or a web-service
  • Preparing the data for the ConicIT Web UI, with a clear, unified presentation of the results of each investigation, including combinations of data from different types of sources
  • Preparing graph data in the DB
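Two kinds of calculated-variables mentioned above can be sketched in a few lines. Both functions and the metric semantics are hypothetical examples, not ConicIT's actual variable definitions:

```c
/* A ratio variable relates two raw metrics sampled at the same minute,
 * e.g. used storage over total storage. */
double ratio_var(double used, double total)
{
    return total > 0.0 ? used / total : 0.0;
}

/* A rate-of-consumption variable: the change in a counter between two
 * consecutive one-minute investigations, expressed per second. */
double rate_var(double prev_count, double curr_count, double interval_sec)
{
    return interval_sec > 0.0 ? (curr_count - prev_count) / interval_sec
                              : 0.0;
}
```

Calculated-variables like these get their own profiles, so anomaly detection applies to them just as it does to raw metrics.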

SplunkTM Connector

With ConicIT’s new SplunkTM Connector you can get any data from your mainframe directly into SplunkTM. You can choose exactly which data to send, including calculated variables and information about the alerts, and save a great deal of money by dramatically reducing the amount of data that you send to SplunkTM. With our SplunkTM Connector you send all, and only, the relevant data to SplunkTM, and once there is an alert ConicIT can send extra details to SplunkTM, which can help you analyze the root cause of the problem that was found.
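The "all, and only, the relevant data" idea amounts to filtering variables against a configured selection before forwarding. A minimal sketch, where the allowlist mechanism and the variable names are hypothetical and not ConicIT's configuration format:

```c
#include <string.h>

/* Hypothetical per-site selection of variables worth forwarding to Splunk;
 * everything else is dropped before it ever reaches the indexer. */
static const char *splunk_allowlist[] = {
    "CPU_UTIL", "DASD_BUSY", "ALERT_TEXT"
};

int should_send_to_splunk(const char *var_name)
{
    size_t n = sizeof splunk_allowlist / sizeof splunk_allowlist[0];
    for (size_t i = 0; i < n; i++)
        if (strcmp(var_name, splunk_allowlist[i]) == 0)
            return 1;
    return 0;
}
```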


PostgreSQL Database - Storing the parsed and calculated data

We use a standard PostgreSQL database to store the data parsed and calculated by ConicIT. The DB is typically installed on the same Linux machine where ConicIT is installed.

External API

ConicIT offers the External API, which allows ConicIT to receive information not only from the mainframe but from virtually any external system, including open systems, external databases, remote Linux machines, files, web-services, or any other source. To connect a new type of data source, all we need to do is implement three simple C functions and create a shared library from that implementation, which easily integrates into ConicCollect.
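To give a feel for the kind of three-function contract involved, here is a skeleton of such an implementation. The function names (ext_open, ext_read, ext_close), their signatures, and the record format are hypothetical; the real interface is defined by ConicIT, and a real source would connect to an actual system instead of serving canned data:

```c
#include <stdio.h>

static int source_open = 0;

/* Open a connection to the external source. Returns 0 on success. */
int ext_open(const char *connection_string)
{
    (void)connection_string;   /* a real source would connect here */
    source_open = 1;
    return 0;
}

/* Fill buf with one "name=value" record; return 0 on success. */
int ext_read(char *buf, size_t buflen)
{
    if (!source_open)
        return -1;
    snprintf(buf, buflen, "QUEUE_DEPTH=42");   /* canned sample record */
    return 0;
}

/* Release the connection. Returns 0 on success. */
int ext_close(void)
{
    source_open = 0;
    return 0;
}
```

Compiled into a shared library (e.g. with `gcc -shared -fPIC`), an implementation along these lines is what ConicCollect would load to treat the new source like any other data source.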

ConicIT Web UI

Our web application is the main user interface of ConicIT, allowing the user to see the results of each investigation, along with graphs and summaries of the analysis, including investigations which took place in the past. It lets you see graphs of the variables, in which a green zone visualizes the normal predicted range of each variable at each hour. You can also view graphs which compare the values of different variables, or graphs which compare the values of the same variable on different days.

SNMP - Sending alerts to external systems

ConicIT can send alerts via SNMP traps to any SNMP server.

Alerts to SplunkTM

ConicIT can also send the alerts to your SplunkTM.

ConicWatch - Popping up alerts on your PC

ConicWatch is the easiest way to receive alerts from ConicIT on time. ConicWatch is a Windows application that runs in your system tray and notifies you whenever a new alert is sent by ConicIT (by the Processing and Alerts Engine). ConicWatch shows a list of all recent alerts, and clicking on an alert takes you to the relevant page in the ConicIT Web UI.

ConicAdmin - Configuring ConicIT

ConicAdmin is a Windows application which allows us to configure the required investigations and the business logic of ConicIT. When installing or upgrading your ConicIT solution, we use ConicAdmin to define which mainframe screens to scrape, which variables to take, which smart variables to calculate, which variables to send to SplunkTM, how to present the data in the ConicIT Web UI, which alerts to enable, and what business logic to apply in each investigation. ConicAdmin lets you process the data with Perl code using our comprehensive Perl API, which opens up endless opportunities for customization.

Read more about ConicIT Mainframe Performance Analytics