Activate on Job Compute

You can activate the Xonai Accelerator on job compute clusters after you have installed Xonai in your Workspace.

The first step is to configure the job cluster to use the Xonai init script you created:

  1. Navigate to Job Runs > Jobs in the Data Engineering section of the left navigation panel.

  2. Click the job for which you want to activate the Xonai Accelerator.

  3. Click Compute > Configure in the right panel.

  4. Click Advanced options to expand the section and then click the Init Scripts tab.

  5. Select Workspace as the Source and then select the Xonai init script you created.

On completion it should look like this:

[Image: dbx-xonai-init-script.png — the Init Scripts tab with the Xonai init script selected from the Workspace source]
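
If you configure job clusters through the Databricks Jobs API instead of the UI, the same workspace init script is referenced in the cluster specification. A minimal sketch of the relevant fragment, assuming a hypothetical script path /Shared/xonai-init.sh:

    "new_cluster": {
      "init_scripts": [
        {
          "workspace": {
            "destination": "/Shared/xonai-init.sh"
          }
        }
      ]
    }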

The second step is to define the required configuration for your Spark jobs:

  1. Close the compute configuration window and click Tasks in the menu at the top left corner.

  2. Copy the recommended configuration for your cluster instance type and add it to the Parameters JSON configuration as extra parameters (a complete example is shown at the end of this step), such as:

    "--conf", "spark.plugins=com.xonai.spark.SQLPlugin",
    "--conf", "spark.executor.memory=<recommended memory>",
    "--conf", "spark.executor.memoryOverhead=<recommended overhead memory>"
    

    Warning

    Unlike on all-purpose compute clusters, Databricks will not correctly forward spark.executor.memory set in the Spark config text box of the compute configuration, so these values must be passed as task parameters instead.
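
    Put together, the Parameters JSON of a spark-submit task carries the Xonai configuration alongside your application arguments. A minimal sketch, in which the class name and JAR path are hypothetical placeholders and the memory values come from the recommended configuration for your instance type:

      [
        "--conf", "spark.plugins=com.xonai.spark.SQLPlugin",
        "--conf", "spark.executor.memory=<recommended memory>",
        "--conf", "spark.executor.memoryOverhead=<recommended overhead memory>",
        "--class", "com.example.MyJob",
        "dbfs:/FileStore/jars/my-job.jar"
      ]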

Your job is now configured to run with the Xonai Accelerator.