Usage

Since Apache HDFS is installed in high-availability mode, an Apache ZooKeeper cluster is required to coordinate the active and standby namenodes.

Install the Stackable ZooKeeper operator and an Apache ZooKeeper cluster like this:

helm install zookeeper-operator stackable/zookeeper-operator
cat <<EOF | kubectl apply -f -
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  version: 3.5.8-stackable0.7.0
  servers:
    roleGroups:
      default:
        replicas: 3
        config: {}
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-znode
spec:
  clusterRef:
    name: simple-zk
    namespace: default
EOF
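
Before continuing, it can help to verify that the ZooKeeper pods are running. A minimal check, assuming the standard app.kubernetes.io labels set by the Stackable operators (adjust the selector if your labels differ):

kubectl get pods -l app.kubernetes.io/name=zookeeper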

Once the ZooKeeper cluster and the operator are up and running, you can create an Apache HDFS cluster as shown below. Please note that the version you need to specify is not just the version of Hadoop you want to roll out; it has to be amended with a Stackable version as shown. This Stackable version is the version of the underlying container image which is used to execute the processes. For a list of available versions, please check our image registry. It should generally be safe to simply use the latest available image version.

cat <<EOF | kubectl apply -f -
apiVersion: hdfs.stackable.tech/v1alpha1
kind: HdfsCluster
metadata:
  name: simple
spec:
  version: 3.3.3-stackable0.1.0
  zookeeperConfigMapName: simple-znode
  dfsReplication: 3
  log4j: |-
    # Define some default values that can be overridden by system properties
    hadoop.root.logger=INFO,console
    hadoop.log.dir=.
    hadoop.log.file=hadoop.log
    # Define the root logger to the system property "hadoop.root.logger".
    log4j.rootLogger=${hadoop.root.logger}, EventCounter
    # Logging Threshold
    log4j.threshold=ALL
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
    log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
  nameNodes:
    roleGroups:
      default:
        selector:
          matchLabels:
            kubernetes.io/os: linux
        replicas: 2
  dataNodes:
    roleGroups:
      default:
        selector:
          matchLabels:
            kubernetes.io/os: linux
        replicas: 3
  journalNodes:
    roleGroups:
      default:
        selector:
          matchLabels:
            kubernetes.io/os: linux
        replicas: 3
EOF
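
Once applied, the operator spins up namenode, datanode and journalnode pods. To follow the rollout, you can list the pods belonging to the cluster; the label selector below is an assumption based on the standard app.kubernetes.io labels:

kubectl get pods -l app.kubernetes.io/instance=simple
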
When scaling up namenodes, make sure to increase the replica count by only one node at a time.
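
For example, to go from two to three namenodes you could patch the replica count of the default role group in a single step. This is only a sketch; the lowercase resource name hdfscluster is assumed to be accepted by kubectl, and the JSON path mirrors the cluster definition shown above:

kubectl patch hdfscluster simple --type merge -p '{"spec":{"nameNodes":{"roleGroups":{"default":{"replicas":3}}}}}'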

Monitoring

The managed HDFS instances are automatically configured to export Prometheus metrics. See Monitoring for more details.

Configuration & Environment Overrides

The cluster definition also supports overriding configuration properties and environment variables, either per role or per role group, where the more specific override (role group) has precedence over the less specific one (role).

Overriding certain properties can lead to faulty clusters. In general, do not change ports, hostnames, or properties related to data directories, high availability, or security.

Configuration Properties

For a role or role group, at the same level as config, you can specify configOverrides for hdfs-site.xml and core-site.xml. For example, if you want to set additional properties on the namenode servers, adapt the nameNodes section of the cluster resource like so:

nameNodes:
  roleGroups:
    default:
      config: [...]
      configOverrides:
        core-site.xml:
          fs.trash.interval: "5"
        hdfs-site.xml:
          dfs.namenode.num.checkpoints.retained: "3"
      replicas: 2

Just as for the config, it is possible to specify this at role level as well:

nameNodes:
  configOverrides:
    core-site.xml:
      fs.trash.interval: "5"
    hdfs-site.xml:
      dfs.namenode.num.checkpoints.retained: "3"
  roleGroups:
    default:
      config: [...]
      replicas: 2

All override property values must be strings. The properties will be formatted and escaped correctly into the XML file.
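
For illustration, the fs.trash.interval override from the example above would end up in the generated core-site.xml roughly like this (the exact formatting is determined by the operator):

<property>
  <name>fs.trash.interval</name>
  <value>5</value>
</property>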

For a full list of configuration options, we refer to the Apache Hadoop documentation for hdfs-site.xml and core-site.xml.

Environment Variables

In a similar fashion, environment variables can be (over)written. For example per role group:

nameNodes:
  roleGroups:
    default:
      config: {}
      envOverrides:
        MY_ENV_VAR: "MY_VALUE"
      replicas: 1

or per role:

nameNodes:
  envOverrides:
    MY_ENV_VAR: "MY_VALUE"
  roleGroups:
    default:
      config: {}
      replicas: 1
Some environment variables will be overridden by the operator and cannot be set manually by the user. These are HADOOP_HOME, HADOOP_CONF_DIR, POD_NAME and ZOOKEEPER.
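
To verify that an override took effect, you can inspect the environment inside a running container. The pod name below is only an example, assuming pods are named <cluster>-<role>-<role group>-<ordinal>; adjust it to the pods in your cluster:

kubectl exec simple-namenode-default-0 -- sh -c 'echo $MY_ENV_VAR'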

Storage for data volumes

You can mount volumes where data is stored by specifying PersistentVolumeClaims for each individual role group:

dataNodes:
  roleGroups:
    default:
      config:
        resources:
          storage:
            data:
              capacity: 128Gi

In the above example, all data nodes in the default group will store data (the location of dfs.datanode.data.dir) on a 128Gi volume.

By default, if nothing is configured in the custom resource for a certain role group, each Pod will have a 1Gi local volume mounted for the data location.
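
The PersistentVolumeClaims created by the operator can be inspected directly; the label selector is again an assumption based on the standard app.kubernetes.io labels:

kubectl get pvc -l app.kubernetes.io/instance=simple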

Memory requests

You can request a certain amount of memory for each individual role group as shown below:

nameNodes:
  roleGroups:
    default:
      config:
        resources:
          memory:
            limit: '2Gi'

In this example, each namenode container in the default group will have a maximum of 2Gi of memory. To be more precise, these memory limits apply to the containers running the namenode daemons, but not to any sidecar containers that are part of the namenode's pod.

Setting this property will also automatically set the maximum Java heap size for the corresponding process to 80% of the available memory. Be aware that if the memory constraint is too low, the cluster might fail to start. If pods terminate with an 'OOMKilled' status and the cluster doesn't start, try increasing the memory limit.
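
With the 2Gi limit from the example above, the heap of each namenode process would therefore be capped at roughly 1.6Gi. To confirm which memory limit a container actually received, you can read it back from the pod spec; the pod name is an example:

kubectl get pod simple-namenode-default-0 -o jsonpath='{.spec.containers[*].resources.limits.memory}'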

For more details regarding Kubernetes memory requests and limits see: Assign Memory Resources to Containers and Pods.

CPU requests

Similarly to memory resources, you can also configure CPU limits, as shown below:

dataNodes:
  roleGroups:
    default:
      config:
        resources:
          cpu:
            max: '500m'
            min: '250m'

For more details regarding Kubernetes CPU limits see: Assign CPU Resources to Containers and Pods.
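
As with memory, these values translate into regular Kubernetes resource requests and limits on the datanode containers and can be read back from the pod spec; the pod name is an example:

kubectl get pod simple-datanode-default-0 -o jsonpath='{.spec.containers[*].resources}'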