# Halo Event/Issue Connector v1.11

[![Build Status](https://travis-ci.org/cloudpassage/connector.svg?branch=master)](https://travis-ci.org/cloudpassage/connector)
[![Maintainability](https://api.codeclimate.com/v1/badges/0ba702a4cf12a6025067/maintainability)](https://codeclimate.com/github/cloudpassage/connector/maintainability)
[![Test Coverage](https://api.codeclimate.com/v1/badges/0ba702a4cf12a6025067/test_coverage)](https://codeclimate.com/github/cloudpassage/connector/test_coverage)

### Requirements

This tool requires the following Python packages:
* cloudpassage
* python-dateutil
* pytz

Install all the requirements at once with `pip install -r requirements.txt`.

### Intro - Quick Start

While this tool can be run by a variety of different scheduling mechanisms, the
most common pattern is to use cron.

In this example:
* We will use cron to run the event/issue connector.
* The connector will append Halo events/issues, in key/value format, to files
(in this example, `/var/log/halo-events-kv.log` and `/var/log/halo-issues-kv.log`).
* We need to retrieve events starting on September 1st, 2021 (Halo
   only retains events for 90 days, so this date is just an example).
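Because of the 90-day retention window, the earliest usable `--starting` date shifts every day and can be computed rather than guessed. A quick Python sketch (illustrative only; not part of the connector):

```python
from datetime import date, timedelta

# Halo retains events for 90 days, so --starting dates older than
# this cutoff return nothing useful.
RETENTION_DAYS = 90
earliest = date.today() - timedelta(days=RETENTION_DAYS)
print(f"--starting={earliest.isoformat()}")
```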

First, we check out this repository in `/opt/cloudpassage/`:

* `mkdir -p /opt/cloudpassage`
* `cd /opt/cloudpassage`
* `git clone https://github.com/cloudpassage/connector`

Next, configure Halo authentication information for the connector:

* In your CloudPassage portal account, generate an auditor (read-only) API key.
* Place the API key and secret, pipe-separated, in a file at
`/opt/cloudpassage/connector/configs/halo.auth`.  The file will contain one
line, which looks like this: `haloapikey|haloapisecret`, replacing `haloapikey`
with the key ID for your Halo API key, and replacing `haloapisecret` with your
API key's secret.

Next, configure cron to run the Halo connector every 5 minutes:

* Run `crontab -e`
* Add a line with the desired schedule:
* Streaming both Halo events and issues into the system syslog:
```
*/5 * * * * /opt/cloudpassage/connector/halo_events.py --starting=2021-09-01 --issues_starting=2021-11-06 --auth=/opt/cloudpassage/connector/configs/halo.auth --configdir=/opt/cloudpassage/connector/configs --eventtype=fim_target_integrity_changed --issues_type=fim --kvsyslog --issues_kvsyslog
```

Save and exit crontab.

Monitor the `/var/log/syslog` file to see events and issues from your Halo account.


* Streaming both Halo events and issues into separate local log files:
```
*/5 * * * * /opt/cloudpassage/connector/halo_events.py --starting=2021-09-01 --issues_starting=2021-09-01 --auth=/opt/cloudpassage/connector/configs/halo.auth --configdir=/opt/cloudpassage/connector/configs --eventtype=fim_target_integrity_changed --issues_type=fim --kvfile=/var/log/halo-events-kv.log --issues_kvfile=/var/log/halo-issues-kv.log >/dev/null 2>&1
```

Save and exit crontab.

Monitor the `/var/log/halo-events-kv.log` and `/var/log/halo-issues-kv.log` files to see events and issues from your Halo account.
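The key/value log files are easy to post-process. A minimal parsing sketch, assuming the common space-separated `key="value"` layout (verify against a real line from your own log, since the exact field set varies by event):

```python
import shlex

def parse_kv_line(line):
    """Split a key/value log line into a dict.

    Assumes space-separated key="value" pairs; check a sample line
    from /var/log/halo-events-kv.log to confirm the layout.
    """
    return dict(pair.split("=", 1) for pair in shlex.split(line))

sample = 'type="fim_target_integrity_changed" critical="true"'
print(parse_kv_line(sample))
```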


### Implementation Notes

__Multiple accounts:__

If you are extracting events/issues from more than one Halo account (up to
five are supported), specify each account on its own line in your `halo.auth`
file like this:

```
key_id_1|secret_1
key_id_2|secret_2
...
...
key_id_5|secret_5
```
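A minimal sketch of how such a file can be parsed (illustrative only; the connector's own loader may differ):

```python
MAX_ACCOUNTS = 5  # the connector supports up to five accounts

def parse_halo_auth(text):
    """Parse pipe-separated key_id|secret lines, skipping blanks."""
    creds = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key_id, secret = line.split("|", 1)
        creds.append((key_id, secret))
    return creds[:MAX_ACCOUNTS]

print(parse_halo_auth("key_id_1|secret_1\nkey_id_2|secret_2"))
```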

__CLI Options:__
```
usage: halo_events.py [-h] [--starting STARTING] [--issues_starting STARTING]
                      --auth AUTH [--threads THREADS] [--batchsize BATCHSIZE]
                      [--configdir CONFIGDIR] [--jsonfile JSONFILE]
                      [--issues_jsonfile JSONFILE] [--ceffile CEFFILE]
                      [--leeffile LEEFFILE] [--kvfile KVFILE]
                      [--issues_kvfile KVFILE] [--facility FACILITY]
                      [--eventtype EVENTTYPE] [--issues_type ISSUES_TYPE]
                      [--cef] [--kv] [--issues_kv] [--leefsyslog] [--cefsyslog]
                      [--kvsyslog] [--issues_kvsyslog] [--sumologic]
                      [--issues_sumologic]

Event/Issue Connector

optional arguments:
  -h, --help            show this help message and exit
  --starting STARTING   Specify start of event time range in ISO-8601 format
  --auth AUTH           Specify a file containing CloudPassage Halo API keys -
                        Key ID and Key secret pairs (up to 5)
  --threads THREADS     Start num threads each reading pages of events in
                        parallel
  --batchsize BATCHSIZE
                        Specify a limit for page numbers, after which we use
                        since
  --configdir CONFIGDIR
                        Specify directory for configration files (saved
                        timestamps)
  --jsonfile JSONFILE   Write events in raw JSON format to file with given
                        filename
  --ceffile CEFFILE     Write events in CEF (ArcSight) format to file with
                        given filename
  --leeffile LEEFFILE   Write events in LEEF (QRadar) format to file with
                        given filename
  --kvfile KVFILE       Write events as key/value pairs to file with given
                        filename
  --facility FACILITY   --facility=<facility,priority> Facility options: auth
                        authpriv cron daemon kern local0 local1 local2 local3
                        local4 local5 local6 local7 lpr mail news syslog user
                        uucp. Priority options: alert crit debug emerg err
                        info notice warning [default: user,info]
  --cef                 Write events in CEF (ArcSight) format to standard
                        output (terminal)
  --kv                  Write events as key/value pairs to standard output
                        (terminal)
  --leefsyslog          Write events in LEEF (QRadar) format to syslog server
  --cefsyslog           Write events in CEF (ArcSight) format to syslog server
  --kvsyslog            Write events as key/value pairs to local syslog daemon
  --sumologic           Send events in JSON format to Sumologic. Must specify
                        sumologic_https_url in configs/portal.yml
  --eventtype           Filter retrieved events by event type (e.g.
                        --eventtype=lids_rule_failed,sca_rule_failed)
  --issues_starting STARTING
                        Specify start of issue time range in ISO-8601 format
  --issues_jsonfile JSONFILE
                        Write issues in raw JSON format to file with given
                        filename
  --issues_kvfile KVFILE
                        Write issues as key/value pairs to file with given
                        filename
  --issues_kv           Write issues as key/value pairs to standard output
                        (terminal)
  --issues_kvsyslog     Write issues as key/value pairs to local syslog daemon
  --issues_sumologic    Send issues in JSON format to Sumologic. Must specify
                        sumologic_https_url in configs/portal.yml
  --issues_type         Filter issues by issue type (e.g. fw, lids, sva, csm,
                        sam, agent, fim)
```
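When the connector is launched from another script rather than cron, the flag list above can be assembled programmatically. A sketch using a hypothetical `build_args` helper (not part of the connector):

```python
def build_args(opts):
    """Turn an options dict into a halo_events.py argument list.

    Boolean True becomes a bare flag (e.g. --kvsyslog); any other
    value becomes --flag=value.
    """
    args = ["python", "halo_events.py"]
    for flag, value in opts.items():
        if value is True:
            args.append(f"--{flag}")
        else:
            args.append(f"--{flag}={value}")
    return args

print(build_args({"auth": "halo.auth", "starting": "2021-09-01",
                  "kvsyslog": True}))
```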

### Halo Connector on Linux

* Install Python 3.6 or newer (https://www.python.org/downloads)

* Once Python is installed, install the necessary Python modules:

```
pip install -r requirements.txt
```


* Download the Halo Connector (https://github.com/cloudpassage/connector)

* Create the `halo.auth` file

* Run the connector (you must specify a `starting`/`issues_starting` CLI parameter)

```
python halo_events.py --auth=halo.auth --starting=YYYY-MM-DD
```
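Because `--starting` must be an ISO-8601 date, a wrapper script can validate the value before launching the connector. A small sketch (the `valid_iso_date` helper is illustrative, not part of the connector):

```python
from datetime import datetime

def valid_iso_date(value):
    """Return True if value parses as YYYY-MM-DD."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

print(valid_iso_date("2021-09-01"))  # True
print(valid_iso_date("09/01/2021"))  # False
```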

### Halo Connector on Windows

* Install Python 3.6 or newer (https://www.python.org/downloads/windows/)

* Add the Python installation folder to the system `PATH` environment
variable, or create a `PYTHONPATH` environment variable and set it to the
installation folder, e.g. `C:\Python36\lib;C:\Python36`

* Once Python is installed, install the necessary Python modules:

```
python -m pip install -r requirements.txt
```

* Download the Halo Connector (https://github.com/cloudpassage/connector)

* Create the `halo.auth` file

* Run the connector (you must currently specify a `starting`/`issues_starting` CLI parameter)

```
python halo_events.py --auth=halo.auth --starting=YYYY-MM-DD
```

#### Remote Syslog on Windows
* In `configs/portal.yml`, you can specify the remote syslog host and port via:

  windows_syslog_host:
  windows_syslog_port:
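A minimal sketch of reading those two keys from a flat YAML file without third-party dependencies (the key names come from above; the connector itself may load the file differently, and a real YAML parser such as PyYAML is safer for anything nested):

```python
def read_flat_yaml(text):
    """Parse top-level 'key: value' pairs from a simple YAML file.

    Good enough for flat files like configs/portal.yml; comments
    and blank lines are skipped.
    """
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip()
    return cfg

sample = "windows_syslog_host: 10.0.0.5\nwindows_syslog_port: 514"
print(read_flat_yaml(sample))
```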


### Halo Connector as Docker Container

#### Requirements:
  * Docker engine (https://docs.docker.com/engine/install/)
  * A Halo account (with auditor role)
  * Clone the Halo Connector repository:

```
git clone https://github.com/cloudpassage/connector.git
```

  * Navigate to the root directory of the connector repository
  * Place the Halo API key ID and secret, pipe-separated, in a file at `configs/halo.auth`.  The file will contain one
    line, which looks like this: `haloapikey|haloapisecret`, replacing `haloapikey` with the key ID for your Halo API key, and `haloapisecret` with your
    API key's secret.

  * Build the connector Docker image:
```
sudo docker build -t cp_connector .
```

  * Run the docker container:
```
sudo docker run -it --rm \
    -v /LOCAL/PATH/TO/LOGS:/var/log \
    -v /dev/log:/dev/log \
    -v /LOCAL/PATH/TO/CONFIGS:/app/configs \
    cp_connector \
    --auth=/app/configs/halo.auth \
    --configdir=/app/configs \
    --starting=YYYY-MM-DD \
    --issues_starting=YYYY-MM-DD \
    --eventtype=SUPPORTED_EVENT_TYPE \
    --issues_type=ISSUE_TYPE \
    --kvsyslog \
    --issues_kvsyslog
```
 
#### Examples:

##### Retrieving both Halo events and issues into the system syslog
```
$ sudo docker run -it --rm \
    -v /var/log:/var/log \
    -v /dev/log:/dev/log \
    -v /home/ubuntu/connector/configs:/app/configs \
    cp_connector \
    --auth=/app/configs/halo.auth \
    --configdir=/app/configs \
    --starting=2021-12-02 \
    --issues_starting=2021-11-06 \
    --eventtype=fim_target_integrity_changed \
    --issues_type=fim \
    --kvsyslog \
    --issues_kvsyslog
```
  * Monitor the `/var/log/syslog` file to see retrieved events and issues from your Halo account.
  * You can filter Halo events by setting the event type to any of the supported event types listed in the [EVENT TYPES](SUPPORTED_EVENT_TYPES.md) document.
  * You can filter Halo issues by setting the issue type to any of the supported issue types (e.g. fw, lids, sva, csm, sam, agent, fim).
  
##### Forward local syslog to a remote syslog server
  * To send retrieved Halo events/issues from the local syslog to a remote syslog server, follow the instructions in [REMOTESYSLOG](REMOTESYSLOG.md)

##### Retrieving both Halo events and issues into separate log files
```
$ sudo docker run -it --rm \
    -v /var/log:/var/log \
    -v /dev/log:/dev/log \
    -v /home/ubuntu/connector/configs:/app/configs \
    cp_connector \
    --auth=/app/configs/halo.auth \
    --configdir=/app/configs \
    --starting=2021-10-01 \
    --issues_starting=2021-11-06 \
    --eventtype=lids_rule_failed \
    --issues_type=sva \
    --kvfile=/var/log/halo-events-kv.log \
    --issues_kvfile=/var/log/halo-issues-kv.log 
```
  * Monitor both the `/var/log/halo-events-kv.log` and `/var/log/halo-issues-kv.log` files to see retrieved events and issues from your Halo account.