SirenML service is unavailable. Please contact your system administrator

Hi,
I am testing Siren Investigate for the first time.
I successfully installed the SirenML plugin via the .zip file. However, when I start Investigate I get this error concerning the ML plugin:
SirenML service is unavailable. Please contact your system administrator

I activated debug logging in config/investigate.yml as well as in /etc/sirenml/sirenml.yml, but I only get a 503 error:

respons [00:47:00.444]  GET /bundles/5ff1542dcc475555920015f954d56ecd.woff2 304 5ms - 9.0B
respons [00:47:00.952]  GET /bundles/3443cc888af3c04b49389a466cf74f0f.woff2 304 4ms - 9.0B
respons [00:47:01.079]  POST /api/saved_objects/bulk_get 200 6ms - 9.0B
respons [00:47:01.086]  GET /bundles/2e82488238926404a9d7eec1022cf609.woff2 304 4ms - 9.0B
--     >>>>>>>>respons [00:47:01.265]  GET /api/machine_learning/models 503 10ms - 9.0B <<<<<------
  log   [00:47:02.021] [debug][plugin] Checking Elasticsearch version
  ops   [00:47:02.318]  memory: 99.4MB uptime: 0:00:33 load: [0.66 0.62 0.54] delay: 0.520
  log   [00:47:04.550] [debug][plugin] Checking Elasticsearch version
  log   [00:47:07.095] [debug][plugin] Checking Elasticsearch version
  ops   [00:47:07.319]  memory: 101.1MB uptime: 0:00:38 load: [0.61 0.61 0.54] delay: 0.454
  log   [00:47:09.623] [debug][plugin] Checking Elasticsearch version
  log   [00:47:12.167] [debug][plugin] Checking Elasticsearch version

Do you know how I can overcome this issue? How can I enable a deeper logging level?
Best regards,
JCP

Hello João,
To see more logging, set the DEBUG environment variable when starting Investigate like this:

DEBUG=machine-learning:* bin/investigate

This will give you more detailed logs about the queries sent from the Investigate server to the SirenML instance.

The 503 error indicates that the Investigate server cannot find the SirenML instance.

  • Is the Docker container up and running? (A quick way to check is sketched after this list.)

  • Is the container running on a different host, or mapped to a different port? If so, this needs to be specified in investigate.yml as

    machine_learning.siren_ml.uri: "http://hostname:port"
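
As a quick sanity check, you can verify that the container is up and that its port is reachable. This is only a sketch: it assumes SirenML listens on port 5001 and runs on the same host as Investigate, so adjust the host and port to match your setup.

    # list running containers started from the SirenML image
    docker ps --filter ancestor=sirensolutions/siren-ml

    # any HTTP response means something is listening on that port;
    # "connection refused" means Investigate will not reach it either
    curl -i http://localhost:5001/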
    

Thanks,
Dara

Hi Dara!
Thanks for the quick reply!
I had not realized I needed to install both the plugin and run the Docker container. I thought I had to choose one or the other.
Okay, I ran the container:
docker run --restart unless-stopped -d -p 5001:5001 -v /var/lib/sirenml:/var/lib/sirenml -v /etc/sirenml:/etc/sirenml sirensolutions/siren-ml:latest

But now I am getting this in the logs:

Do you have documentation regarding the requirements for this Docker container? I am running it on a single-host Debian installation.
Regards,
JCP

I also have TensorFlow 1.5 installed on my host.

Hi João,

Was TensorFlow 1.5 already installed on your machine? It may create a conflict, since we are running version 1.13.1. Make sure SirenML is using the correct one (1.13.1).
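
If you want to double-check, one way (just a sketch, assuming the image ships a Python interpreter with TensorFlow installed) is to print the version from inside the container, using the container ID from docker ps:

    # replace <container_id> with the ID of the siren-ml container
    docker exec <container_id> python -c "import tensorflow as tf; print(tf.__version__)"

On the host, pip show tensorflow will tell you which version is installed there, so you can confirm whether your 1.5 install is separate from the one SirenML uses.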

Best,

Davide

Hi,
I believe I cannot install TensorFlow 1.13.1 on my local server due to CPU requirements.
I will check this on an AWS host.
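
For reference, if I understand correctly, the prebuilt TensorFlow packages from 1.6 onwards need a CPU with AVX support, which is what my local server seems to be missing. A quick way to check is:

    # prints "AVX supported" if the CPU advertises the AVX instruction set
    grep -q avx /proc/cpuinfo && echo "AVX supported" || echo "no AVX support"
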
Regards,
João Pereira

Ok, I installed this on an AWS host and it is running now.
Thanks for the support.