Usage
If you are not installing the operator using Helm, the CRD for this operator must be created after installation:
kubectl apply -f deploy/nificluster.yaml
To create a three-node Apache NiFi cluster with SingleUser authentication enabled, apply the following to your Kubernetes cluster.
Please note that the version you specify is not only the NiFi version you want to roll out; it must also carry a Stackable version suffix, as shown. This Stackable version is the version of the underlying container image used to execute the processes. For a list of available versions, please check our image registry. It is generally safe to use the latest available image version.
The admin credentials that you can then log in with are: admin:supersecretpassword
If you do not provide admin user credentials, the operator can auto-generate a random password for you, provided this functionality is enabled. You can retrieve this password by running the following command:
kubectl get secret nifi-admin-credentials-simple -o jsonpath="{.data.password}" | base64 --decode | cat - <(echo)
You may need to adjust this command if you change the configuration for adminCredentialsSecret in the example below.
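To see what the decode step in the pipeline above does in isolation, here it is run on a literal value instead of the Secret (the encoded string is the base64 form of supersecretpassword):

```shell
# Decode a base64-encoded password locally, as in the kubectl pipeline above
echo "c3VwZXJzZWNyZXRwYXNzd29yZA==" | base64 --decode
# prints: supersecretpassword
```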
apiVersion: nifi.stackable.tech/v1alpha1
kind: NifiCluster
metadata:
  name: simple-nifi
spec:
  version: 1.16.3-stackable0.1.0
  zookeeperConfigMapName: simple-nifi-znode
  config:
    authentication:
      method:
        singleUser:
          adminCredentialsSecret: nifi-admin-credentials-simple
    sensitiveProperties:
      keySecret: nifi-sensitive-property-key
  nodes:
    roleGroups:
      default:
        selector:
          matchLabels:
            kubernetes.io/os: linux
        config:
          log:
            rootLogLevel: INFO
        replicas: 3
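The cluster definition above can then be applied and the rollout observed; a sketch, assuming the manifest is saved as nifi.yaml and that the operator labels pods with the common app.kubernetes.io/instance label (both are assumptions here):

```shell
# Apply the NifiCluster manifest (filename is an example)
kubectl apply -f nifi.yaml

# Watch the three requested node pods come up
kubectl get pods -l app.kubernetes.io/instance=simple-nifi --watch
```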
If you want to set a password for the initial admin user you can do this by applying the following object:
apiVersion: v1
kind: Secret
metadata:
  name: nifi-admin-credentials-simple
stringData:
  username: admin
  password: supersecretpassword
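If you prefer not to write the Secret manifest by hand, an equivalent Secret can be created imperatively with kubectl, using the same name and keys as above:

```shell
# Create the admin credentials Secret directly
kubectl create secret generic nifi-admin-credentials-simple \
  --from-literal=username=admin \
  --from-literal=password=supersecretpassword
```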
You can create the ZNode config map referenced in zookeeperConfigMapName via the following (assuming there is a ZooKeeper cluster called simple-zk):
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-nifi-znode
spec:
  clusterRef:
    name: simple-zk
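After applying the ZookeeperZnode, you can check that the referenced config map exists; a sketch, assuming the ZooKeeper operator names the discovery ConfigMap after the ZNode:

```shell
# Verify the discovery ConfigMap referenced by zookeeperConfigMapName
kubectl get configmap simple-nifi-znode -o yaml
```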
Monitoring
The managed NiFi instances are automatically configured to export Prometheus metrics. See Monitoring for more details.
Configuration & Environment Overrides
The cluster definition also supports overriding configuration properties and environment variables, either per role or per role group, where the more specific override (role group) has precedence over the less specific one (role).
Do not override port numbers. This will lead to cluster malfunction.
Environment Variables
Environment variables can be (over)written by adding the envOverrides property. For example, per role group:
nodes:
  roleGroups:
    default:
      config: {}
      replicas: 1
      envOverrides:
        MY_ENV_VAR: "MY_VALUE"
or per role:
nodes:
  envOverrides:
    MY_ENV_VAR: "MY_VALUE"
  roleGroups:
    default:
      config: {}
      replicas: 1
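To confirm that an override took effect, you can print the variable inside a running node container; a sketch, where the pod name simple-nifi-node-default-0 is an assumption based on typical StatefulSet naming:

```shell
# Print the overridden variable from inside a node pod (pod name is an example)
kubectl exec simple-nifi-node-default-0 -- printenv MY_ENV_VAR
```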
Volume storage
By default, a NiFi cluster will create five different persistent volume claims for the flow file, provenance, database, content and state folders. Each of these PVCs will request 2Gi. It is recommended that you configure these volume requests according to your needs.
Storage requests can be configured at role or group level, for one or more of the persistent volumes as follows:
nodes:
  roleGroups:
    default:
      config:
        resources:
          storage:
            flowfile_repo:
              capacity: 12Gi
            provenance_repo:
              capacity: 12Gi
            database_repo:
              capacity: 12Gi
            content_repo:
              capacity: 12Gi
            state_repo:
              capacity: 12Gi
In the above example, all nodes in the default group will request 12Gi of storage for each of the various folders.
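You can inspect the resulting claims with kubectl; a sketch, assuming the claims carry the common app.kubernetes.io/instance label:

```shell
# List the persistent volume claims created for the cluster
kubectl get pvc -l app.kubernetes.io/instance=simple-nifi
```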
Memory requests
You can request a certain amount of memory for each individual role group as shown below:
nodes:
  roleGroups:
    default:
      config:
        resources:
          memory:
            limit: '2Gi'
In this example, each node container in the "default" group will have a maximum of 2Gi of memory.
Setting this property will automatically also set the maximum Java heap size for the corresponding process to 80% of the available memory. Be aware that if the memory constraint is too low, the cluster might fail to start. If pods terminate with an 'OOMKilled' status and the cluster doesn’t start, try increasing the memory limit.
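As a quick sanity check of the 80% rule, the heap size derived from a 2Gi limit can be computed with plain shell arithmetic (a rough sketch; the operator's exact rounding may differ):

```shell
# 2Gi expressed in MiB
limit_mib=2048
# Roughly 80% of the limit becomes the maximum Java heap
heap_mib=$(( limit_mib * 80 / 100 ))
echo "${heap_mib} MiB"
# prints: 1638 MiB
```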
For more details regarding Kubernetes memory requests and limits see: Assign Memory Resources to Containers and Pods.
CPU requests
Similarly to memory resources, you can also configure CPU limits, as shown below:
nodes:
  roleGroups:
    default:
      config:
        resources:
          cpu:
            max: '500m'
            min: '250m'
For more details regarding Kubernetes CPU limits see: Assign CPU Resources to Containers and Pods.
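To check which requests and limits actually landed on a pod, you can inspect it with kubectl; a sketch, where the pod name is an assumption based on typical StatefulSet naming:

```shell
# Show the effective resource requests and limits of a node pod (pod name is an example)
kubectl describe pod simple-nifi-node-default-0 | grep -A 2 -E 'Limits|Requests'
```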