FAQs

Frequently Asked Questions

To purchase the logrobot tool, please visit our pricing page:

http://www.logrobot.com/pricing-tables.html

The log monitoring capabilities of LoGrobot are vast. If you need documented verification (for your peace of mind) that LoGrobot can handle your particular use case scenario(s), we strongly recommend simply reaching out to us.

When you purchase the LoGrobot tool, you're not just purchasing any monitoring tool. You're buying a tool built over the span of 7+ years, equipped to handle numerous use case scenarios we've had to account for thanks to the many customization requests from our users.

Here are just some of the tasks LoGrobot does, right out of the box:

  1. Monitors and alerts on the contents of system log files (errors, strings, keywords, patterns, etc.)
  2. Monitors and alerts on the timestamps of log files (verifies specific files are being updated regularly)
  3. Monitors several log files at the same time - allows you to monitor all logs of a database or application
  4. Graphs the frequency with which user-specified patterns occur in log files - or graphs for anomalies
  5. Monitors and alerts on the size of log files (alerts when a log or file grows past a certain size)
  6. Conditional monitoring - i.e. alert if a certain field of specific log entries has a value greater than or less than X
  7. Analysis - easily identify which minute or hour of the day had the most entries recorded, with anomaly detection

Benefits:

  1. Configurable to run either via Zabbix, Zenoss, Nagios, or CRONTAB (as a cron entry)
    a). Get email alerts & notifications on all log checks
    b). Does not require the installation of Nagios, Zabbix or Zenoss
  2. Automatically sends log metrics to Graphite for historical trending and visualization
    - No need for any extra configuration on your part!
  3. Monitors several different patterns in the same log
    a). Allows passing a different threshold to each pattern
    b). Allows filtering of specific lines to avoid unnecessary noise
  4. Manage log file checks from a central location
    - Integrate with Nagios, Zenoss, Zabbix, Sensu, Hyperic, New Relic and much more!
    - Aggregate critical log entries into one central server
  5. Simple, pluggable command-line parameters (no need for any confusing configuration files)
    - Eliminates the need to re-deploy configs to remote hosts each time a log check is implemented or updated
  6. Configurable to alert on the size of log files
    - Example: alert if the size of /var/app/custom.app.log exceeds 10MB
  7. Configurable to alert on the growth of log files
    - Example: alert if /var/log/messages is the same size it was at the time of the last check
  8. Monitor all logs, or a specific type of log, in a directory
    a). Point logXray to ANY directory with just one check - no need to define separate checks for each log file
    b). Specify the types of files to exclude/include in monitoring, and assign different thresholds to each file type
  9. Scan specific logs by time frame (i.e. previous 20 minutes, 60 minutes, 1 day, 1 week, etc.)
  10. Remote agent included to enable monitoring of logs on several hosts FROM ONE master server
    a). For users who don't have NRPE installed in their environment - allows complete control of log checks on all remote hosts / servers
  11. Use ONE tool to automatically monitor any log format - avoid juggling several different scripts!
    EXAMPLE 1: Specifying multiple logs to monitor, in addition to a directory where you don't know how deep in the directory tree the log file will be:

      Note:
      • _ast_ = * (a single asterisk)
      • _ds_ = $ (an end-of-name anchor)
      • _mulast_ = multiple asterisks (matches any directory depth)

    Appending _ds_ to a file name (right before the comma) indicates that you only want to monitor that specific log file, and no variations of it. For instance, /var/log/chef/client.log_ds_ means scan only client.log; do not scan any other log file that may have "client.log" in its name, i.e. client.log.1, client.log.save, etc. If you wish to scan logs with similar names, replace the _ds_ with _ast_.
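The markers are simply stand-ins for shell glob characters. As an illustration only (a sketch of the convention described above, not LoGrobot's internal code), they translate roughly like this:

```shell
# Sketch: translate the documented placeholder markers into ordinary
# glob notation (illustration only).
#   _mulast_ -> **  (multiple asterisks: any directory depth, like bash's globstar)
#   _ast_    -> *   (also match name variations such as client.log.1)
#   _ds_     -> removed trailing anchor: match this exact name only
to_glob() {
  echo "$1" | sed -e 's/_mulast_/**/g' -e 's/_ast_/*/g' -e 's/_ds_$//'
}

to_glob '/var/log/messages_ds_'                        # -> /var/log/messages
to_glob '/var/log/chef/client.log_ast_'                # -> /var/log/chef/client.log*
to_glob '/opt/oracle/diag/_mulast_/alert_DC4WMMH2.log' # -> /opt/oracle/diag/**/alert_DC4WMMH2.log
```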

      [jbowman@tpphxwmmdb002 plugins]$
      [jbowman@tpphxwmmdb002 plugins]$ ./logrobot localhost /var/tmp/logXray,tail=10 autonda /var/log/messages_ds_,/var/log/chef/client.log_ast_,/opt/oracle/diag/_mulast_/alert_DC4WMMH2.log 60m '.*error.*_P_.*fatal.*_P_Session.*of.*user' '.' 1 2 mylogCheck -ndshow

      CRITICAL: [/var/log/messages_ds_,/var/log/chef/client.log_ast_,/opt/oracle/diag/_mulast_/alert_DC4WMMH2.log][4]

      /var/log/chef/client.log.save:P=(33)_F=(35704356s)_R=(0,5110=5110)

      /var/log/messages:P=(1804)_F=(28s)_R=(0,7284=7284)

      opt_oracle_diag_rdbms_dc4wmmh_DC4WMMH2_trace_alert_DC4WMMH2.log::: 0

      var_log_chef_client.log.save:::
      [2016-01-20T06:51:54-07:00] ERROR: yum_package[nagios-nsca-client] (gapNagios::client_package line 71) had an error: Chef::Exceptions::Exec: yum -d0 -e0 ....
      Transaction check error:
      [2016-01-20T06:57:09-07:00] ERROR: bash[install_oracle] (gapOracleDBA::default line 83) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected proc....
      [2016-01-20T07:21:52-07:00] ERROR: yum_package[nagios-nsca-client] (gapNagios::client_package line 71) had an error: Chef::Exceptions::Exec: yum -d0 -e0....
      Transaction check error:
      [2016-01-20T07:27:06-07:00] ERROR: bash[install_oracle] (gapOracleDBA::default line 83) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected pro....
      [2016-01-20T07:29:04-07:00] ERROR: yum_package[nagios-nsca-client] (gapNagios::client_package line 71) had an error: Chef::Exceptions::Exec: yum -d0 -e0....
      Transaction check error:
      [2016-01-20T07:34:19-07:00] ERROR: bash[install_oracle] (gapOracleDBA::default line 83) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected pro...
      33
      var_log_messages::: Mar 8 15:15:03 tpphxwmmdb002 systemd: Started Session 12915 of user oracle.
      Mar 8 15:15:03 tpphxwmmdb002 systemd: Starting Session 12916 of user oracle.
      Mar 8 15:15:03 tpphxwmmdb002 systemd: Started Session 12916 of user oracle.
      Mar 8 15:20:01 tpphxwmmdb002 systemd: Starting Session 12919 of user root.
      Mar 8 15:20:01 tpphxwmmdb002 systemd: Started Session 12919 of user root.
      Mar 8 15:25:01 tpphxwmmdb002 systemd: Starting Session 12920 of user oracle.
      Mar 8 15:25:01 tpphxwmmdb002 systemd: Started Session 12920 of user oracle.
      Mar 8 15:25:01 tpphxwmmdb002 systemd: Starting Session 12921 of user oracle.
      Mar 8 15:25:01 tpphxwmmdb002 systemd: Started Session 12921 of user oracle.
      1804
      var_log_chef_client.log::: 0
      [jbowman@tpphxwmmdb002 plugins]$
      [jbowman@tpphxwmmdb002 plugins]$
      [jbowman@tpphxwmmdb002 plugins]$

    EXAMPLE 2: Specifying just one directory in cases where you don't know how deep in the directory tree the log file you want to monitor will be:

      [jbowman@tpphxwmmdb002 plugins]$
      [jbowman@tpphxwmmdb002 plugins]$ ./logrobot localhost /var/tmp/logXray,tail=10 autonda /opt/oracle/diag/_mulast_/alert_DC4WMMH2.log 60m '.*error.*_P_.*fatal.*_P_Session.*of.*user' '.' 1 2 mylogCheck -ndshow

      OK: [/opt/oracle/diag/_mulast_/alert_DC4WMMH2.log][1] /opt/oracle/diag/rdbms/dc4wmmh/DC4WMMH2/trace/alert_DC4WMMH2.log:P=(0)_F=(17s,112s)_R=(1109,1109=0)

      [jbowman@tpphxwmmdb002 plugins]$
      [jbowman@tpphxwmmdb002 plugins]$
      [jbowman@tpphxwmmdb002 plugins]$ ./logrobot localhost /var/tmp/logXray,tail=10 autonda /opt/oracle/diag/_mulast_/alert_DC4WMMH2.log 60m '.*error.*_P_.*fatal.*_P_Session.*of.*user' '.' 1 2 mylogCheck -ndshow

      OK: [/opt/oracle/diag/_mulast_/alert_DC4WMMH2.log][1] /opt/oracle/diag/rdbms/dc4wmmh/DC4WMMH2/trace/alert_DC4WMMH2.log:P=(0)_F=(117s)_R=(0,1109=1109)

      [jbowman@tpphxwmmdb002 plugins]$
      [jbowman@tpphxwmmdb002 plugins]$
      [jbowman@tpphxwmmdb002 plugins]$

What is the difference between logXray and LoGrobot?

logXray is used to generate on-demand graphs of the recorded statistics of a log file. LoGrobot is a full-blown log monitoring plugin that handles all things related to log monitoring, i.e. monitoring log content, log timestamps, log file size, log growth (stale logs), and simultaneous monitoring of multiple log files and multiple different patterns at the same time.

Can I keep my list of patterns in a configuration file instead of on the command line?

Yes. If your list of strings is too long or too many to fit nicely on the command line, you can instruct LoGrobot to use configuration files instead. All you need to put in the config file(s) is the list of patterns (one per line) you want to monitor. Nothing more.
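A patterns file of the kind described might look like this (the file name is hypothetical; the patterns are in the same regex style as the examples elsewhere on this page):

```
# my_patterns.txt (hypothetical name) - one pattern per line, nothing more
.*error.*
.*fatal.*
Session.*of.*user
```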

Does LoGrobot handle log rotation?

Yes. LoGrobot automatically watches for signs of log rotation and, when rotation is detected, it scans the unread entries from the recently rotated log in addition to any unread entries from the fresh live log.

Can LoGrobot monitor any log file?

Yes. LoGrobot can monitor any log file regardless of format or size.

Does LoGrobot depend on any other application?

No. LoGrobot does not rely on any other application in order to monitor and alert on logs.

Can I visualize my log file activity?

If you wish to visualize your log file activity, you have options.

  1. You can download and install the Graphite application
    • You may also want to install Grafana if you wish to beautify your graphs (we can help you with the installation of both Graphite and Grafana)
    • After Graphite/Grafana is installed, simply add an entry to each log check you create
      • The entry will include the Graphite server IP and the port
      • Whenever LoGrobot sees a check with a graphite setting, it automatically sends its metrics to the listed IP at the listed port
      • Example: .. '.*error.*' '.' 1 2 errchk -ndfoundmul graphite,52.88.12.122,2003,typical

  2. Utilize the licensed logXray dashboard
    • With this dashboard, you don't need to install Graphite; all you need is an Apache/HTTP PHP-enabled webserver
    • You can generate on-demand charts and graphs to show the historical trend of:
      • Application, database, system & network errors
      • Volume of entries
    • Compare the latest metrics retrieved from a log check to past metrics, so you know quickly whether the current value is cause for concern, i.e.:
      • Why is the volume of entries today lower than that of a week ago?
      • Why did the number of errors suddenly triple?
      • Why is the volume of entries for the current hour so different from the same hour yesterday, the day before, 3 days ago, a week ago, etc.?
    • Uncover valuable pieces of information you didn't even know were available!
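Graphite's standard plaintext listener accepts one "path value timestamp" line per metric on port 2003. Assuming the graphite,IP,port setting above feeds that listener, you could ship a data point by hand like this (the metric name and value are made up for illustration):

```shell
# Build one metric line in Graphite's plaintext format:
#   <metric.path> <value> <unix-timestamp>
GRAPHITE_HOST=52.88.12.122   # the example server IP from the check above
GRAPHITE_PORT=2003
line="logchecks.errchk.matches 33 $(date +%s)"   # hypothetical metric name
echo "$line"
# On a host that can reach Graphite, ship it with netcat:
# printf '%s\n' "$line" | nc "$GRAPHITE_HOST" "$GRAPHITE_PORT"
```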

Will LoGrobot require heavy customization before it fits my environment?

No. LoGrobot / logXray has years of real-life situations built into it. It has been heavily tested in QA, DEV, PrePROD and PROD environments. The tool, as it is, is highly versatile and able to handle any log monitoring situation you throw at it.

Is there a money-back guarantee?

Yes. There is a 30-day money-back guarantee.

Do you take custom development requests?

Absolutely! We usually complete custom development requests within 24 to 72 hours of submission. If your request isn't urgent, please say so in your email; non-urgent email requests will be completed within 5 business days. Contact us for more information.

How do I install LoGrobot for remote log monitoring?

If using NRPE:

  1. ssh to one of the remote hosts on which you have log files to monitor
  2. Download our free auto-installer
  3. Pass the download link of your recently purchased LoGrobot zip to the auto-installer, along with the directory to install logrobot into
    • Make sure you specify whichever directory you consider to be your plugins or scripts directory
    • In other words, perform the following steps on the remote node:
    • cd ~
    • wget http://www.LoGrobot.com/klazy ; ls -ld klazy ; chmod 755 klazy ; ls -ld klazy
    • ./klazy http://www.LoGrobot.com/the-logrobot.zip /the/path/you/prefer/to/put/the/executable/for/easy/access
    • i.e.
    • ./klazy http://www.LoGrobot.com/logrobot.verify_your@emailaddress.com...zip /prod/nagios-4.2.4/plugins/logrobot
  4. vi /path/to/your/nrpe.cfg on the remote host and add an entry referencing the absolute path to the logrobot tool from step 3
  5. Restart the nrpe process on the host
  6. When the above steps complete successfully, the logrobot tool is installed and ready to be used. Repeat for each remote host whose logs you want to monitor.
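The nrpe.cfg entry referenced above follows NRPE's standard command[...] syntax. A hypothetical example (the command name "check_app_log" is made up, and the arguments are modeled on the checks shown earlier on this page):

```
# in nrpe.cfg (command name is illustrative)
command[check_app_log]=/prod/nagios-4.2.4/plugins/logrobot localhost /var/tmp/logXray autonda /tmp/err.log 60m '.*fatal.*' '.' 1 2 TagErr -ndshow
```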

If using the Custom Monitoring Agent that comes with logXray:

  1. First install the agent on the remote box:
    • su - nagios (or whatever your monitoring user name is)
    • cd ~
    • wget http://www.LoGrobot.com/klazy ; ls -ld klazy ; chmod 755 klazy ; ls -ld klazy
    • ./klazy logXray /var/tmp/logXray 1040 <ip(s)-of-your-master-server(s)>
      • i.e.
      • ./klazy logXray /var/tmp/logXray 1040 10.20.30.40
      • (OR)
      • ./klazy logXray /var/tmp/logXray 1040 10.20.30.40,50.60.70.80
    • ./klazy logXray status
      • - Verify the logXray remote agent is up and running.

  2. Then install the logrobot tool on the remote box:
    • cd ~
    • ./klazy http://www.LoGrobot.com/the-logrobot.zip /var/tmp/logXray/plugins
    • i.e.
    • ./klazy http://www.LoGrobot.com/logrobot.verify_your@email.....zip /var/tmp/logXray/plugins
      • - Note, on the REMOTE NODES, you MUST specify the directory [ /var/tmp/logXray/plugins ] as the location for the log monitoring plugin.
      • - When the above completes successfully, the logrobot tool is now installed on the remote node.

  3. Finally, test remote log monitoring and confirm all is well:
    • ssh to the master server
    • run the following command
      • ./logrobot <node-fqdn> /var/tmp/logXray autonda /tmp/err.log 60m '.*fatal.*' '.' 1 2 TagErr -ndshow sudo:remote

What makes LoGrobot better than other log monitoring tools?

  1. Simplicity - It does not require an extensive learning process to get used to. Extremely user-friendly!
    • Unlike our competitors, we built LoGrobot / logXray to cater directly to the everyday needs of the typical:
      • System Administrator - Watch system logs, security logs, mail logs and basically any logs
      • Database Administrator - Monitor multiple different error codes in one log or multiple logs, and easily specify exclusion patterns wherever you wish to eliminate unnecessary noise
      • Monitoring Engineer - Spin up new log monitoring checks very quickly without having to develop them yourself!
      • Developer - Monitor important log files for errors or activity during code testing
  2. Versatility - It can be used either as a plugin or its own standalone monitoring system
    • Usable directly on the command line to perform a wide range of different operations on logs & directories
  3. Compatibility - Easily integrated with your existing monitoring system
    • Nagios
    • Zabbix
    • Zenoss
    • Sensu
    • Tivoli
    • Datadog
    • Crontab / Cron (for sending log alerts in case you don't have any monitoring system in place)
    • ....
  4. Support - All users of LoGrobot receive free support
    • When it comes to monitoring log files and managing alerts on them, we understand there are many different ways things can be done
      • Our users are given the chance to request the development of custom features for a fee
        • These customer-specific features are tailored specifically to each individual user's needs
  5. Command line Usability - All necessary parameters are passable directly from the command line - No configs!
  6. Modules - Unlike most tools, LoGrobot does not require the installation of non-native modules or libraries on the system
    • What that means is, there is nothing complicated for you to configure
  7. Affordable - A very inexpensive log monitoring tool considering the amount of work it will save you
    • No more scripts for you to write!
    • If you need a custom feature, simply reach out to us (support@logrobot.com) and we'll develop it for you!
    • Chances are, your custom feature already exists in the LoGrobot arsenal, in which case we'll just need to show you how to access it.
  8. Maintenance - Constantly updated for added simplicity, building of new features & polishing of the old
    • - Yes, all customers get those updates for FREE for the first year
  9. Speed - Completes scanning of log files in a very short period of time
    • Can monitor multiple logs in a directory in under 1.5 seconds
    • Requires NO extra system configuration or new package/library acquisition for it to work.
      • - It's ready to go right out of the box!
  10. Alerting - Its main purpose is to monitor log files/directories & alert on their content, size, timestamp and growth
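Since LoGrobot can run straight from cron, a check like the ones above can double as a scheduled alerter. A hypothetical crontab entry (the paths, tag name, and recipient address are made up; the invocation is modeled on the examples earlier on this page):

```
# crontab -e: run a log check every 5 minutes; email if the check exits non-zero
*/5 * * * * /prod/nagios-4.2.4/plugins/logrobot localhost /var/tmp/logXray autonda /tmp/err.log 60m '.*fatal.*' '.' 1 2 TagErr -ndshow || echo "TagErr log check failed" | mail -s "LoGrobot alert" admin@example.com
```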
How is LoGrobot different from Splunk and similar log aggregators?

It is very, very different. Splunk and tools like it focus on aggregating logs from several thousand hosts and sources into one searchable database; they then attempt (by means of a very complicated query language) to let users alert on, among other things, these combined log entries through a central interface.

LoGrobot, on the other hand, is a tool designed, built and maintained for one specific purpose: to be the end-all-be-all tool for alerting on regular files, log files and directories, wherever they are in your network. There are no daunting requirements to satisfy before you can use LoGrobot. Once LoGrobot is put on a system, all you need to provide is the log file(s), and it will begin monitoring them. And if you wish to graph specific metrics about the logs being monitored, that is doable as well; the graphs can be generated on demand.