
With the same REST API, you can index logs directly from your application, or you can craft your own "log sender". 

NOTE:
If you are sending logs from your application, use the Elasticsearch HTTP API. If you are sending logs from a Java application, use a library like log4j2-elasticsearch-http or Jest instead of the Elasticsearch TransportClient.


Besides specifying your Logsene app token as the index name, it's good to have a field named "@timestamp" whose value is a valid ISO 8601 timestamp. This field is used for searching and sorting if you use Kibana with Logsene. If you don't provide a timestamp, Logsene will add one when it receives your message.
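As a sketch of what that looks like (the event type name and field values here are hypothetical; replace the token placeholder with your own Logsene app token), indexing a single event with an explicit @timestamp could be done like this:

```shell
# Hypothetical example: index one log event with an explicit ISO 8601 @timestamp.
# TOKEN is a placeholder for your Logsene app token (used as the index name).
TOKEN="your-logsene-app-token"
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")   # ISO 8601, UTC

EVENT="{\"@timestamp\": \"$TIMESTAMP\", \"message\": \"Hello World\", \"severity\": \"info\"}"

# POST the event to the Logsene receiver over the Elasticsearch HTTP API.
curl -m 10 -XPOST "https://logsene-receiver.sematext.com/$TOKEN/event/" -d "$EVENT"
```

If the @timestamp field were omitted here, Logsene would simply stamp the event with its own receive time.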

...

  • the @timestamp field is an ISO 8601 date
  • the geoip field is an object that contains a location geo point field (this works well if you're using Logstash)
  • the predefined fields host, facility, severity, syslog-tag, source and tags are not analyzed, which means they support only exact matches (you can still use wildcards, for example searching for web-server* matches web-server01)
  • all string fields are analyzed by whitespace and lowercased by default, enabling a search for message:hello to match an event with Hello World in the message field
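To illustrate the two behaviors (a sketch, assuming search is exposed through the standard Elasticsearch _search endpoint on the same receiver host, and that your events carry a host field with values like web-server01):

```shell
# Placeholder for your Logsene app token, used as the index name.
TOKEN="your-logsene-app-token"

# Non-analyzed field (host): exact match only, but wildcards work.
curl -m 10 "https://logsene-receiver.sematext.com/$TOKEN/_search?q=host:web-server*"

# Analyzed field (message): the lowercase term "hello" matches "Hello World".
curl -m 10 "https://logsene-receiver.sematext.com/$TOKEN/_search?q=message:hello"
```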

...

Custom Log Index Mapping

If you need to define specific fields manually, you can use the Put Mapping API, with logsene-receiver.sematext.com as the host name, 80/443 as the port, and your Logsene application token as the index name. If the default log index fields (also known as the index mapping) don't fit your needs, you can create a completely custom index mapping. See Custom Logsene Mapping Template How-To.

For example, let's assume you have a type called userlogs, where your logs have a float field called price. To define it upfront, you can run the following command (replacing $TOKEN with your Logsene app token):

Code Block
languagebash
curl -XPUT "https://logsene-receiver.sematext.com/$TOKEN/userlogs/_mapping" -d'
{
    "userlogs": {
        "properties": {
            "price": {
                "type": "float"
            }
        }
    }
}'

NOTE: if you already have a mapping in place, note that some changes are incompatible and will be rejected. For example, changing a field from float to integer won't be allowed. If you already have data for a field and you change its mapping in an incompatible way, the data needs to be re-indexed with the new mapping settings. There are two options to handle this:

  1. Create a new Logsene app with new mapping and start shipping your logs there instead
  2. Remove the old mapping - which also removes all logs from the type - and then put the new mapping. Here's an example of removing a mapping:
     
Code Block
languagebash
curl -XDELETE "https://logsene-receiver.sematext.com/$TOKEN/userlogs"

Adding new fields doesn't require mapping deletion or reindexing. Removing fields also doesn't require reindexing; of course, logs containing the removed fields should not be sent to Logsene after that.
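For example (a sketch reusing the userlogs type from above; the quantity field is hypothetical), adding a new field is just another Put Mapping call, with no deletion or reindexing required:

```shell
# Placeholder for your Logsene app token, used as the index name.
TOKEN="your-logsene-app-token"

# Adding a field is a backwards-compatible mapping change.
curl -m 10 -XPUT "https://logsene-receiver.sematext.com/$TOKEN/userlogs/_mapping" -d'
{
    "userlogs": {
        "properties": {
            "quantity": {
                "type": "integer"
            }
        }
    }
}'
```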

Handling Multiple Log Structures

If you have N different log structures, the best way to handle that is to create N Logsene Apps, each with its own index mapping. For example, you may have web server logs, system logs in /var/log/messages, and custom application logs. Each of these 3 types of logs has a different structure: the web server logs probably use the Apache Common Log format, the logs in /var/log/messages have syslog structure, and your own application's logs can be in any format your application happens to use. To handle all 3 log formats elegantly, simply create 3 separate Logsene Apps and use a different mapping for each of them. See Custom Logsene Mapping Template How-To for details.