Installation
There are two ways to run the HDFS Operator:
* Using Helm
* Build from source
Using Helm
Helm allows you to download and deploy Stackable operators on Kubernetes and is by far the easiest installation method. First make sure you have added the Stackable Helm repository:
helm repo add stackable https://repo.stackable.tech/repository/helm-dev/
helm repo update stackable
Then install the Stackable Operator for Apache Hadoop:
helm install hdfs-operator stackable/hdfs-operator
Helm will deploy the operator in a container on Kubernetes and apply the CRDs for the Apache HDFS service. You're now ready to deploy Apache HDFS in Kubernetes as described in Usage.
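To verify the installation, you can check that the operator pod came up and that the CRDs were applied. The label selector below is an assumption about how the Helm chart labels its resources; adjust it if your chart uses different labels:

```shell
# Check the operator pod started by the Helm chart (label is an assumption)
kubectl get pods -l app.kubernetes.io/name=hdfs-operator

# Check that the HDFS-related CRDs were applied
kubectl get crds | grep hdfs
```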
Build from source
For development, testing and debugging purposes it is useful to be able to deploy a locally modified operator without the need to publish a container image and/or a helm chart.
Requirements:
* A recent Rust toolchain to compile the sources. Version 1.58 was the latest at the time of writing.
* Docker to build the image.
* Optionally, a local Kubernetes cluster such as kind to run the operator and the HDFS cluster.
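Before building, you can quickly check that these tools are available on your PATH. This is a small sketch; it assumes the standard binary names (`cargo`, `docker`, `kind`):

```shell
# Report which build prerequisites are installed (kind is optional)
for tool in cargo docker kind; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```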
Build the operator binary and package it into a container image:
cargo build
cp target/debug/stackable-hdfs-operator .
docker build -t docker.stackable.tech/stackable/hdfs-operator:0.3.0-nightly -f docker/Dockerfile.devel .
rm stackable-hdfs-operator
The image can then be loaded into a local kind cluster like this:
kind load docker-image docker.stackable.tech/stackable/hdfs-operator:0.3.0-nightly --name hdfs
and the operator can be deployed by using the local Helm chart:
helm install hdfs-operator deploy/helm/hdfs-operator/
Now you can proceed to install a custom Apache HDFS cluster as described in Usage.
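For orientation, a cluster definition will look roughly like the following. This is an illustrative sketch only: the role names, role group layout, and fields such as `zookeeperConfigMapName` are assumptions, and the Usage page has the authoritative schema for your operator version:

```yaml
# Illustrative sketch only — consult the Usage documentation for the real schema
apiVersion: hdfs.stackable.tech/v1alpha1
kind: HdfsCluster
metadata:
  name: simple-hdfs
spec:
  # ConfigMap with ZooKeeper connection details (assumed field name)
  zookeeperConfigMapName: simple-hdfs-znode
  nameNodes:
    roleGroups:
      default:
        replicas: 2
  dataNodes:
    roleGroups:
      default:
        replicas: 3
  journalNodes:
    roleGroups:
      default:
        replicas: 1
```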