Running Multiple Instances of the Spark Operator
Running Multiple Instances of the Spark Operator within the Same K8s Cluster
If you need to run multiple instances of the Spark operator within the same Kubernetes cluster, make sure that no two running instances watch the same Spark job namespaces; otherwise they will compete to reconcile the same SparkApplication resources.
For example, you can deploy two Spark operator instances in the spark-operator namespace. First, install one with release name spark-operator-1 that watches the spark-1 namespace:
# Create the spark-1 namespace if it does not exist
kubectl create ns spark-1
# Install the Spark operator with release name spark-operator-1
helm install spark-operator-1 spark-operator/spark-operator \
  --namespace spark-operator \
  --create-namespace \
  --set 'spark.jobNamespaces={spark-1}'
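Before installing the second instance, it can help to confirm that the first release is healthy. A quick check (the app.kubernetes.io/instance label is the standard Helm convention and is assumed here to be set by the chart):

# Show the status of the first release
helm status spark-operator-1 --namespace spark-operator
# List the operator pods belonging to this release
kubectl get pods --namespace spark-operator \
  --selector app.kubernetes.io/instance=spark-operator-1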
Then deploy another instance with release name spark-operator-2, which watches the spark-2 namespace:
# Create the spark-2 namespace if it does not exist
kubectl create ns spark-2
# Install the Spark operator with release name spark-operator-2
helm install spark-operator-2 spark-operator/spark-operator \
  --namespace spark-operator \
  --create-namespace \
  --set 'spark.jobNamespaces={spark-2}'
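To verify that the two instances stay isolated, you can submit a trivial SparkApplication to one of the watched namespaces; only the operator watching that namespace should pick it up. Below is a minimal sketch, assuming the spark:3.5.0 image and a driver service account named spark-operator-2-spark (charts typically derive this name from the release name; adjust it, or set spark.serviceAccount.name at install time, to match your chart version):

# spark-pi.yaml: a minimal test job submitted to the spark-2 namespace
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: spark-2                # reconciled by spark-operator-2 only
spec:
  type: Scala
  mode: cluster
  image: spark:3.5.0                # assumed image; substitute your own
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.12-3.5.0.jar
  sparkVersion: "3.5.0"
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark-operator-2-spark   # assumed service account created by the chart
  executor:
    cores: 1
    instances: 1
    memory: 512m

Apply it with kubectl apply -f spark-pi.yaml and check its status with kubectl get sparkapplication spark-pi --namespace spark-2; the spark-operator-1 instance, which watches only spark-1, should ignore it.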