Siren Federate Plugin with Elasticsearch Helm Chart Installation

I’m currently trying to deploy the Siren Federate plugin on our Elasticsearch cluster.

Some context:

Our Elasticsearch is deployed as a helm chart on Kubernetes (AKS).

You can get the chart source with:


helm pull elastic/eck-elasticsearch

tar -xzf eck-elasticsearch-0.14.0.tgz
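
(If the elastic Helm repo isn’t configured yet, it needs to be added first, pointing at the standard Elastic chart repository:)

helm repo add elastic https://helm.elastic.co

helm repo update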

Inside values.yaml there’s a block where you can specify the Siren plugin install command, like this:


    # List of initialization containers belonging to the pod.
    #
    # Common initContainers include setting sysctl, or in 7.x versions of Elasticsearch,
    # installing Elasticsearch plugins.
    #
    # https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
    initContainers:
    - command:
      - sh
      - "-c"
      - bin/elasticsearch-plugin install --batch https://download.support.siren.io/federate/8.15.5-37.1.zip
      name: install-plugins
      securityContext:
        privileged: true

However, when we install the Helm chart with the plugin installation enabled, we see no indication that it has taken effect.

Do you have any step-by-step instructions for the specific case of installing the Siren plugin with the Elasticsearch Helm chart?

Hi Traiano,

If the Helm chart installation succeeded, can you please verify whether the plugin is installed:

curl -X GET "http://<elasticsearch-host>:9200/_cat/plugins?v"
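
Note that ECK enables TLS and basic auth by default, so this call will typically need credentials. A sketch, assuming the default elastic user secret created by the ECK operator (named <cluster-name>-es-elastic-user) and the operator-managed HTTP service:

# Fetch the auto-generated password for the built-in 'elastic' user
PASSWORD=$(kubectl get secret <cluster-name>-es-elastic-user -n <namespace> -o go-template='{{.data.elastic | base64decode}}')

# Query installed plugins over HTTPS, skipping verification of the self-signed certificate
curl -k -u "elastic:$PASSWORD" "https://<cluster-name>-es-http.<namespace>:9200/_cat/plugins?v"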

If it’s not installed, can you also try adding a volume mount for the plugins directory, like this:

     initContainers:
     - command:
       - sh
       - "-c"
       - bin/elasticsearch-plugin install --batch https://download.support.siren.io/federate/8.15.5-37.1.zip
       name: install-plugins
       securityContext:
         privileged: true
       volumeMounts:
       - name: plugins
         mountPath: /usr/share/elasticsearch/plugins

If the Helm chart installation is not successful, please share your full values.yaml and the error you are getting.

Regards
Manu

Hi Manu, thanks for your response.

I made the changes you suggested and reinstalled Elasticsearch. Here are my results:

root@nginx:/# curl -v -k -u "elastic:Cv2------------------" -X GET "https://eck-elasticsearch-es-http.elastic:9200/_cat/plugins?v"
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying 10.0.2.43:9200...
* Connected to eck-elasticsearch-es-http.elastic (10.0.2.43) port 9200 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN: server did not agree on a protocol. Uses default.
* Server certificate:
*  subject: OU=eck-elasticsearch; CN=eck-elasticsearch-es-http.elastic.es.local
*  start date: Jan 13 10:54:27 2025 GMT
*  expire date: Jan 13 11:04:27 2026 GMT
*  issuer: OU=eck-elasticsearch; CN=eck-elasticsearch-http
*  SSL certificate verify result: self-signed certificate in certificate chain (19), continuing anyway.
* using HTTP/1.x
* Server auth using Basic with user 'elastic'
> GET /_cat/plugins?v HTTP/1.1
> Host: eck-elasticsearch-es-http.elastic:9200
> Authorization: Basic [REDACTED]
> User-Agent: curl/7.88.1
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/1.1 200 OK
< X-elastic-product: Elasticsearch
< content-type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
<

name component version

* Connection #0 to host eck-elasticsearch-es-http.elastic left intact

I note that no plugins are returned in the list.

Looking at the container logs, I see no mention of the federate plugin:

azureuser@bayleaf-bastion:~/.traiano/eck-deployment/bayleaf/infrastructure/azure-aks/helm-charts/elastic$ kubectl logs eck-elasticsearch-es-default-0 -n elastic | egrep -i federate
Defaulted container "elasticsearch" out of: elasticsearch, elastic-internal-init-filesystem (init), elastic-internal-suspend (init)

Notably, the pod reports only ECK’s internal init containers (elastic-internal-init-filesystem, elastic-internal-suspend); my install-plugins init container doesn’t appear at all.

Here is the values.yaml as you requested; perhaps you could test it on a mock install and confirm?

---
# Default values for eck-elasticsearch.
# This is a YAML-formatted file.

# Overridable names of the Elasticsearch resource.
# By default, this is the Release name set for the chart,
# followed by 'eck-elasticsearch'.
#
# nameOverride will override the name of the Chart with the name set here,
# so nameOverride: quickstart, would convert to '{{ Release.name }}-quickstart'
#
# nameOverride: "quickstart"
#
# fullnameOverride will override both the release name, and the chart name,
# and will name the Elasticsearch resource exactly as specified.
#
# fullnameOverride: "quickstart"

# Version of Elasticsearch.
#
version: 8.17.0

# Elasticsearch Docker image to deploy
#
# image:
image: docker.elastic.co/elasticsearch/elasticsearch:8.17.0

# Labels that will be applied to Elasticsearch.
#
labels: {}

# Annotations that will be applied to Elasticsearch.
#
annotations: {}

# Settings for configuring Elasticsearch users and roles.
# ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-users-and-roles.html
#
auth: {}

# Settings for configuring stack monitoring.
# ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-stack-monitoring.html
#
monitoring: {}
  # metrics:
  #   elasticsearchRefs:
  #   - name: monitoring
  #     namespace: observability
  # logs:
  #   elasticsearchRefs:
  #   - name: monitoring
  #     namespace: observability

# Control the Elasticsearch transport module used for internal communication between nodes.
# ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-transport-settings.html
#
transport: {}
  # service:
  #   metadata:
  #     labels:
  #       my-custom: label
  #   spec:
  #     type: LoadBalancer
  # tls:
  #   subjectAltNames:
  #     - ip: 1.2.3.4
  #     - dns: hulk.example.com
  #   certificate:
  #     secretName: custom-ca

# Settings to control how Elasticsearch will be accessed.
# ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-accessing-elastic-services.html
#
http: {}
  # service:
  #   metadata:
  #     labels:
  #       my-custom: label
  #   spec:
  #     type: LoadBalancer
  # tls:
  #   selfSignedCertificate:
  #     # To fully disable TLS for the HTTP layer of Elasticsearch, simply
  #     # set the below field to 'true', removing all other fields.
  #     disabled: false
  #     subjectAltNames:
  #       - ip: 1.2.3.4
  #       - dns: hulk.example.com
  #   certificate:
  #     secretName: custom-ca

# Control Elasticsearch Secure Settings.
# ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-es-secure-settings.html#k8s-es-secure-settings
#
secureSettings: []
  # - secretName: one-secure-settings-secret
  # Projection of secret keys to specific paths
  # - secretName: gcs-secure-settings
  #   entries:
  #   - key: gcs.client.default.credentials_file
  #   - key: gcs_client_1
  #     path: gcs.client.client_1.credentials_file
  #   - key: gcs_client_2
  #     path: gcs.client.client_2.credentials_file

# Settings for limiting the number of simultaneous changes to an Elasticsearch resource.
# ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-update-strategy.html
#
updateStrategy: {}
  # changeBudget:
  #   maxSurge: 3
  #   maxUnavailable: 1

# Controlling of connectivity between remote clusters within the same kubernetes cluster.
# ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-remote-clusters.html
#
remoteClusters: {}
  # - name: cluster-two
  #   elasticsearchRef:
  #     name: cluster-two
  #     namespace: ns-two

# VolumeClaimDeletePolicy sets the policy for handling deletion of PersistentVolumeClaims for all NodeSets.
# Possible values are DeleteOnScaledownOnly and DeleteOnScaledownAndClusterDeletion.
# By default, if not set or empty, the operator sets DeleteOnScaledownAndClusterDeletion.
#
volumeClaimDeletePolicy: ""

# Settings to limit the disruption when pods need to be rescheduled for some reason such as upgrades or routine maintenance.
# By default, if not set, the operator sets a budget that doesn't allow any pod to be removed in case the cluster is not green or if there is only one node of type `data` or `master`.
# In all other cases the default PodDisruptionBudget sets `minUnavailable` equal to the total number of nodes minus 1.
# To completely disable the pod disruption budget set `disabled` to true.
#
# podDisruptionBudget:
#   spec:
#     minAvailable: 2
#     selector:
#       matchLabels:
#         elasticsearch.k8s.elastic.co/cluster-name: quickstart
#   disabled: true

# Used to check access from the current resource to a resource (for ex. a remote Elasticsearch cluster) in a different namespace.
# Can only be used if ECK is enforcing RBAC on references.
#
# serviceAccountName: ""

# Number of revisions to retain to allow rollback in the underlying StatefulSets.
# By default, if not set, Kubernetes sets 10.
#
# revisionHistoryLimit: 2

# Node configuration settings.
# The node roles which can be configured here are:
# - "master"
# - "data_hot"
# - "data_cold"
# - "data_frozen"
# - "data_content"
# - "ml"
# - "ingest"
# ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-node-configuration.html
#
nodeSets:
- name: default
  count: 1
  config:
    # Comment out when setting the vm.max_map_count via initContainer, as these are mutually exclusive.
    # For production workloads, it is strongly recommended to increase the kernel setting vm.max_map_count to 262144
    # and leave node.store.allow_mmap unset.
    # ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-virtual-memory.html
    #
    node.store.allow_mmap: false
  podTemplate:
    # The following spec is exactly the Kubernetes Core V1 PodTemplateSpec. Any fields within the PodTemplateSpec
    # are supported within the 'spec' field below.  Please see below documentation for the exhaustive list of fields.
    #
    # https://v1-24.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podtemplatespec-v1-core
    #
    # Only the commonly overridden/used fields will be noted below.
    #
    spec:

    # If specified, the pod's scheduling constraints
    # https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-advanced-node-scheduling.html
    # https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
    # affinity:
    #   nodeAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       nodeSelectorTerms:
    #       - matchExpressions:
    #         - key: topology.kubernetes.io/zone
    #           operator: In
    #           values:
    #           - antarctica-east1
    #           - antarctica-west1
    # Containers array.  Should only be used to customize the 'elasticsearch' container using the following fields.
      containers:
      - name: elasticsearch

        # List of environment variables to set in the 'elasticsearch' container.
        # https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
        # env:
        # - name: "my-env-var"
        #   value: "my-value"

        # Compute Resources required by this container.
        resources:
          # Requests describes the minimum amount of compute resources required. If Requests is omitted for a container,
          # it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
          #
          # Defaults used by the ECK Operator, if not specified, are below
          limits:
            # cpu: 1
            memory: 2Gi
          requests:
            # cpu: 1
            memory: 2Gi

          # Example increasing both the requests and limits values:
          # limits:
          #   cpu: 4
          #   memory: 8Gi
          # requests:
          #   cpu: 1
          #   memory: 8Gi

        # SecurityContext defines the security options the container should be run with.
        # If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
        #
        # These typically are set automatically by the ECK Operator, and should only be adjusted
        # with the full knowledge of the effects of each field.
        #
        # securityContext:

          # Whether this container has a read-only root filesystem. Default is false.
          # readOnlyRootFilesystem: false

          # The GID to run the entrypoint of the container process. Uses runtime default if unset.
          # runAsGroup: 1000

          # Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure
          # that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed.
          # runAsNonRoot: true
          # The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified.
          # runAsUser: 1000

    # ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec.
    # https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod
    # imagePullSecrets:
    # - name: "image-pull-secret"

    # List of initialization containers belonging to the pod.
    #
    # Common initContainers include setting sysctl, or in 7.x versions of Elasticsearch,
    # installing Elasticsearch plugins.
    #
    # https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
#    initContainers:
#    - command:
#      - sh
#      - "-c"
#      - bin/elasticsearch-plugin install --batch https://download.support.siren.io/federate/8.15.5-37.1.zip
#      name: install-plugins
#      securityContext:
#        privileged: true
    initContainers:
    - command:
      - sh
      - "-c"
      - bin/elasticsearch-plugin install --batch https://download.support.siren.io/federate/8.15.5-37.1.zip
      name: install-plugins
      securityContext:
        privileged: true
      volumeMounts:
      - name: plugins
        mountPath: /usr/share/elasticsearch/plugins

    # NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node.
    # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
    # https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-advanced-node-scheduling.html
    # nodeSelector:
    #   diskType: ssd
    #   environment: production

    # If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority.
    # Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default.
    # https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
    # priorityClassName: ""

    # See previously defined 'securityContext' within 'podTemplate' for all available fields.
    # securityContext: {}
    # ServiceAccountName is the name of the ServiceAccount to use to run this pod.
    # https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
    # serviceAccountName: ""

    # Optional duration in seconds to wait for the Elasticsearch pod to terminate gracefully.
    # terminationGracePeriodSeconds: 30s

    # If specified, the pod's tolerations that will apply to all containers within the pod.
    # https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
    # tolerations:
    # - key: "node-role.kubernetes.io/elasticsearch"
    #   effect: "NoSchedule"
    #   operator: "Exists"

    # TopologySpreadConstraints describes how a group of pods ought to spread across topology domains.
    # Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed.
    #
    # These settings are generally applied within each `nodeSets[].podTemplate` field to apply to a specific Elasticsearch nodeset.
    #
    # https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-advanced-node-scheduling.html
    # topologySpreadConstraints: {}

    # List of volumes that can be mounted by containers belonging to the pod.
    # https://kubernetes.io/docs/concepts/storage/volumes
    # volumes: []

# Settings for controlling Elasticsearch ingress. Enabling ingress will expose your Elasticsearch instance
# to the public internet, and as such is disabled by default.
#
# Each Cloud Service Provider has different requirements for setting up Ingress. Some links to common documentation are:
# - AWS:   https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
# - GCP:   https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
# - Azure: https://learn.microsoft.com/en-us/azure/aks/app-routing
# - Nginx: https://kubernetes.github.io/ingress-nginx/
#
ingress:
  enabled: false

  # Annotations that will be applied to the Ingress resource. Note that some ingress controllers are controlled via annotations.
  #
  # Nginx Annotations: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
  #
  # Common annotations:
  #   kubernetes.io/ingress.class: gce          # Configures the Ingress resource to use the GCE ingress controller and create an external Application Load Balancer.
  #   kubernetes.io/ingress.class: gce-internal # Configures the Ingress resource to use the GCE ingress controller and create an internal Application Load Balancer.
  #   kubernetes.io/ingress.class: nginx        # Configures the Ingress resource to use the NGINX ingress controller.
  #
  annotations: {}
  # Labels that will be applied to the Ingress resource.
  #
  labels: {}

  # Some ingress controllers require the use of a specific class name to route traffic to the correct controller, notably AKS and EKS, which
  # replaces the use of the 'kubernetes.io/ingress.class' annotation.
  #
  # className: webapprouting.kubernetes.azure.com | alb

  # Ingress paths are required to have a corresponding path type. Defaults to 'Prefix'.
  #
  # There are 3 supported path types:
  # - ImplementationSpecific
  # - Prefix
  # - Exact
  #
  # ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
  #
  pathType: Prefix

  # Hosts are a list of hosts included in the Ingress definition, with a corresponding path at which the default Elasticsearch service
  # will be exposed. Each host in the list should be a fully qualified DNS name that will resolve to the exposed Ingress object.
  #
  # ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
  #
  hosts:
    - host: chart-example.local
      path: /

  # TLS defines whether TLS will be enabled on the Ingress resource.
  #
  # *NOTE* Many Cloud Service Providers handle TLS in a custom manner, and as such, it is recommended to consult their documentation.
  # Notably GKE and Nginx Ingress Controllers seems to respect the Ingress TLS settings, AKS and EKS ignore it.
  #
  # - AKS:   https://learn.microsoft.com/en-us/azure/aks/app-routing-dns-ssl
  # - GKE:   https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#options_for_providing_ssl_certificates
  # - EKS:   https://aws.amazon.com/blogs/containers/serve-distinct-domains-with-tls-powered-by-acm-on-amazon-eks/
  # - Nginx: https://kubernetes.github.io/ingress-nginx/user-guide/tls/
  #
  # Kubernetes ingress TLS documentation:
  # ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  #
  tls:
    enabled: false
    # Optional Kubernetes secret name that contains a base64 encoded PEM certificate and private key that corresponds to the above 'hosts' definitions.
    # If tls is enabled, but this field is not set, the self-signed certificate and key created by the ECK operator will be used.
    # secretName: chart-example-tls
---

Hi Traiano,

I noticed a version compatibility issue between Elasticsearch and the Federate plugin in your values.yaml. The Federate plugin version must match the Elasticsearch version: the Federate artifact version encodes its target Elasticsearch version, so 8.15.5-37.1 is Federate build 37.1 for Elasticsearch 8.15.5.

In your current configuration, I see the following:

# Version of Elasticsearch.
#
version: 8.17.0

# Elasticsearch Docker image to deploy
#
# image:
image: docker.elastic.co/elasticsearch/elasticsearch:8.17.0

However, the Federate plugin you are using is:

- bin/elasticsearch-plugin install --batch https://download.support.siren.io/federate/8.15.5-37.1.zip

To resolve this, please change the Elasticsearch version to 8.15.5 to match the Federate plugin version you’re using, and try again.

Additionally, if that alone doesn’t work, I suggest adding a volumes section in nodeSets.podTemplate.spec to define the volume backing the plugin installation (an emptyDir shared between the init container and the Elasticsearch container, so plugins installed by the init container are visible to Elasticsearch):

volumes:
- name: plugins
  emptyDir: {}

Let me know how it goes.

Regards
Manu

Hi Manu

Thanks for the tip. I re-deployed again with the following changes:

# Version of Elasticsearch.
#
#version: 8.17.0
version: 8.15.5

# Elasticsearch Docker image to deploy
#
# image:
image: docker.elastic.co/elasticsearch/elasticsearch:8.15.5

Plugin and plugin storage configuration:

    initContainers:
    - command:
      - sh
      - "-c"
      - /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch https://download.support.siren.io/federate/8.15.5-37.1.zip
      name: install-plugins
      securityContext:
        privileged: true
      volumeMounts:
      - name: plugins
        mountPath: /usr/share/elasticsearch/plugins

    volumes:
    - name: plugins
      emptyDir: {}

To test further, I exec’ed into the container to check that a manual installation works:

elasticsearch@eck-elasticsearch-es-default-0:/$ /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch <url>/federate/8.15.5-37.1.zip
-> Installing <the url>federate/8.15.5-37.1.zip
-> Downloading <url>https://download.support.siren.io/federate/8.15.5-37.1.zip
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission accessClassInPackage.sun.misc
* java.lang.RuntimePermission accessClassInPackage.sun.misc.*
* java.lang.RuntimePermission accessClassInPackage.sun.security.provider
* java.lang.RuntimePermission accessDeclaredMembers
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission loadLibrary.*
* java.lang.RuntimePermission setContextClassLoader
* java.lang.reflect.ReflectPermission suppressAccessChecks
* java.net.NetPermission getProxySelector
* java.net.SocketPermission * connect,resolve
* java.security.SecurityPermission insertProvider
* java.security.SecurityPermission putProviderProperty.*
* java.util.PropertyPermission * read,write
See https://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.
-> Installed siren-federate
-> Please restart Elasticsearch to activate any plugins installed
elasticsearch@eck-elasticsearch-es-default-0:/$

So the installation command itself seems fine. However, the API test still shows nothing:

root@nginx:/# curl -k -u "elastic:REDACTED" -X GET https://<endpoint url>.elastic:9200/_cat/plugins?v
name component version
root@nginx:/#

Is there additional debugging I could turn on to find out what’s going wrong at pod creation time?

Thanks in advance!
Traiano

Hi Traiano,

You could try the following to troubleshoot why the plugin isn’t getting installed during pod creation:

1. Enable Detailed Logs for the initContainer

Logs from the initContainer: ensure you are capturing detailed logs from the initContainer. Use kubectl logs to check both the initContainer and the main container logs:

kubectl logs <pod-name> -n <namespace> -c install-plugins

Log command output: modify the initContainer to emit detailed logs during the plugin installation, for example by enabling shell tracing:

initContainers:
  - name: install-plugins
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.5
    command:
      - sh
      - "-c"
      - |
        set -x  # Enable script debugging
        /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch https://download.support.siren.io/federate/8.15.5-37.1.zip
        echo "Plugin installation completed"
    volumeMounts:
      - name: plugins
        mountPath: /usr/share/elasticsearch/plugins
    securityContext:
      privileged: true

2. Check Kubernetes Events

  • Pod events: use Kubernetes events to look for any warnings or errors during pod creation. These can highlight issues related to volume mounts, security contexts, or other deployment problems:

kubectl describe pod <pod-name> -n <namespace>

  • Event logs: check for Warning or Failed events related to volume mounts, container starts, or initContainer execution.
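
3. Inspect the Rendered Pod Spec

You can also confirm whether the custom init container made it into the pod spec at all; a quick check using jsonpath:

kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.initContainers[*].name}'

If install-plugins does not appear in the output, the initContainers block from values.yaml is not reaching the pod template.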

Please try these troubleshooting steps and let me know how it goes.

Regards
Manu

Hi Manu

I’ve gone through these steps with nothing visible in the event logs, pod logs, or via the curl API, unfortunately.

It seems that the initContainer is not being run at all - there is complete silence on that part.

I wonder if the configuration in values.yaml is being picked up at all (is there any way to check?).
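
(One way to check is to render the chart locally and look for the initContainers block in the generated Elasticsearch manifest; a sketch, assuming the unpacked chart directory and the values.yaml from earlier in this thread:)

helm template eck-elasticsearch ./eck-elasticsearch -f values.yaml | grep -n -A 12 initContainers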

So far the configuration for initContainer looks like this:

    initContainers:
      - name: install-plugins
        image: docker.elastic.co/elasticsearch/elasticsearch:8.15.5
        command:
          - sh
          - "-c"
          - |
            set -x
            /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch https://download.support.siren.io/federate/8.15.5-37.1.zip
            echo "plugin installation completed for siren-federate ..."
        volumeMounts:
          - name: plugins
            mountPath: /usr/share/elasticsearch/plugins
        securityContext:
          privileged: true

    volumes:
    - name: plugins
      emptyDir: {}

Question: could we try something other than emptyDir to host the plugins? Perhaps a persistent volume?

Hi @Manu_Agarwal, Siren Support

I’ve managed to get the plugins installed.

However, I’ve now run into a compatibility issue between the Siren Investigate version and the installed version of Federate.

{"type":"log","@timestamp":"2025-01-14T12:41:40Z","tags":["info","Investigate"],"pid":480,"message":"Target Federate version: 7.17.1-27.1"}
{"type":"log","@timestamp":"2025-01-14T12:41:40Z","tags":["status","plugin:elasticsearch@12.1.4","info"],"pid":480,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2025-01-14T12:41:40Z","tags":["status","plugin:elasticsearch@12.1.4","error"],"pid":480,"state":"red","message":"Status changed from yellow to red - Siren Investigate 12.1.4 requires all nodes to pass the compatibility matrix. The following incompatible nodes were found in your cluster: \nvundefined @ (undefined) || The Federate installed is higher than the target version: 8.7.1-31.2, please upgrade Investigate or install a supported version of Federate.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2025-01-14T12:41:43Z","tags":["status","plugin:elasticsearch@12.1.4","error"],"pid":480,"state":"red","message":"Status changed from red to red - Unable to ensure that rest.action.multi.allow_explicit_index=true: [parse_exception] failed to parse multi get request. unknown field [_type]","prevState":"red","prevMsg":"Siren Investigate 12.1.4 requires all nodes to pass the compatibility matrix. The following incompatible nodes were found in your cluster: \nvundefined @ (undefined) || The Federate installed is higher than the target version: 8.7.1-31.2, please upgrade Investigate or install a supported version of Federate."}

Could you please recommend the correct version of Siren Investigate to use with the version of the Federate plugin we have running on Elasticsearch?

Current versions:

  1. Elasticsearch: 8.7.1
  2. Siren-federate plugin: https://download.support.siren.io/federate/8.7.1-31.2.zip
  3. Siren Investigate: “image: sirensolutions/siren-investigate:12.1.4”

Hi Traiano,

I have found the issue you were facing: it is a version compatibility problem. The Elasticsearch version you were trying to install is not available in the Helm repo; you can verify this with the following command:

helm search repo elastic/elasticsearch --versions

This will return the list of Elasticsearch versions available in the Elastic Helm repository; pick one that has a corresponding supported Siren Federate version.

We have tested with ES 7.17.1 using the classic elastic/elasticsearch chart (note this is a different chart from eck-elasticsearch, which is why the values.yaml keys below differ), with this values.yaml:

clusterName: "elasticsearch"
nodeGroup: "master"

masterService: ""

# Set roles to only master for 1-node setup
roles:
  master: "true"
  ingest: "false"
  data: "false"
  remote_cluster_client: "false"
  ml: "false"

replicas: 1
minimumMasterNodes: 1

esMajorVersion: ""

clusterDeprecationIndexing: "false"

# Allow you to add any config files such as elasticsearch.yml and log4j2.properties
esConfig: {}

esJvmOptions: {}

# Set the resources for the 1-node setup
resources:
  requests:
    cpu: "1000m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"

volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi

# Configure the plugin installation in the init container
extraVolumes:
  - name: plugins
    emptyDir: {}

extraVolumeMounts:
  - name: plugins
    mountPath: /usr/share/elasticsearch/plugins

extraInitContainers:
  - name: install-siren-federate
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
    command:
      - /bin/bash
      - -c
      - |
        set -e
        # Install the Siren Federate Plugin
        /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch https://download.support.siren.io/federate/7.17.1-27.2.zip
        # Restart Elasticsearch to apply plugin
        kill -SIGTERM 1
    volumeMounts:
      - name: plugins
        mountPath: /usr/share/elasticsearch/plugins

# Other general configurations
networkHost: "0.0.0.0"
httpPort: 9200
transportPort: 9300

service:
  enabled: true
  type: ClusterIP
  annotations: {}

updateStrategy: RollingUpdate
maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  runAsNonRoot: true
  runAsUser: 1000

sysctlVmMaxMapCount: 262144

readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5

clusterHealthCheckParams: "wait_for_status=green&timeout=1s"

imagePullSecrets: []
nodeSelector: {}
tolerations: []

ingress:
  enabled: false

Run this command to deploy Elasticsearch with Helm:

helm upgrade --install elasticsearch-master elastic/elasticsearch --version 7.17.1 -f values.yaml -n elasticsearch

Verify the pod status:

kubectl get pods -n elasticsearch

Check the logs for the plugin installation:

kubectl logs elasticsearch-master-0 -n elasticsearch

Verify Cluster Health:

kubectl exec -it elasticsearch-master-0 -n elasticsearch -- curl -X GET "localhost:9200/_cluster/health?pretty"

Verify the installed plugins:

kubectl exec -it elasticsearch-master-0 -n elasticsearch -- bin/elasticsearch-plugin list

These steps should resolve the issue.

Regards
Manu

Hi Traiano,

About the last issue you commented:

Please verify that every ES cluster has the same version of ES and of the Federate plugin installed, and use the latest Siren Investigate image, as the one you are using is not supported.

Siren Investigate is currently at version 14.5.0; you can pull the image from here:

docker pull sirensolutions/siren-investigate:14.5.0

Regards
Manu

Hi Greg, Support

We’ve been a bit slow with our Siren+Senzing integration evaluation, and I see the license is expiring in 3 days.

Could we please have an extension of the license for perhaps 1-2 weeks?

This will allow us to complete our evaluation of the Siren integration with Senzing.

Many thanks in advance,

Traiano

Hi Traiano,

Sure we will share the license through our support portal.

Regards
Manu

Thanks!

Let me know when it’s ready.

Cheers,

Traiano

Hi Traiano,

License shared with you through support channel.

Regards
Manu

Hi Manu

Thanks, license applied:


root@nginx:/tmp# ./license.sh

+ username=sirenadmin

+ password=REDACTED

+ ca_cert_file=na

+ licenseFile=./accenture-extended-trial-siren.bin

+ elasticsearchURL=https://eck-elasticsearch-es-http.elastic:9200

+ curl -k -XPUT -u sirenadmin:REDACTED --header 'Content-Type: application/json' -T ./accenture-extended-trial-siren.bin https://eck-elasticsearch-es-http.elastic:9200/_siren/license

{"acknowledged":true}

Regards,

Traiano
