# Experimental Support of Vertical Federated XGBoost using NVFlare

This directory contains a demo of Vertical Federated Learning using NVFlare.

## Training with CPU only

To run the demo, first build XGBoost with the federated learning plugin enabled (see the README).
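
If you have not built it yet, the following is a minimal sketch of the CPU-only build, assuming you are running from the root of the XGBoost repository (the plugin README remains the authoritative reference):

```shell
# Build XGBoost with the federated learning plugin enabled (CPU only).
cmake -B build -S . -DPLUGIN_FEDERATED=ON
cmake --build build -j
```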

Install NVFlare:

```shell
pip install nvflare
```

Prepare the data (note that this step downloads the HIGGS dataset, which is 2.6 GB compressed and 7.5 GB uncompressed, so make sure you have enough disk space and a fast internet connection):

```shell
./prepare_data.sh
```
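
In a vertical federated setting, each party holds a disjoint subset of the feature columns for the same rows. `prepare_data.sh` takes care of the split; purely as an illustration (the actual script may partition the columns differently), a column-wise split of the HIGGS CSV could look like:

```shell
# HIGGS.csv has the label in column 1 followed by 28 feature columns.
# Hypothetical split: site-1 keeps the label plus the first feature block,
# site-2 keeps the remaining feature columns for the same rows.
cut -d, -f1-15  HIGGS.csv > site-1.csv
cut -d, -f16-29 HIGGS.csv > site-2.csv
```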

Start the NVFlare federated server:

```shell
/tmp/nvflare/poc/server/startup/start.sh
```

In another terminal, start the first worker:

```shell
/tmp/nvflare/poc/site-1/startup/start.sh
```

And the second worker:

```shell
/tmp/nvflare/poc/site-2/startup/start.sh
```

Then start the admin CLI:

```shell
/tmp/nvflare/poc/admin/startup/fl_admin.sh
```

In the admin CLI, run the following command:

```shell
submit_job vertical-xgboost
```

Once training finishes, the model files should be written to `/tmp/nvflare/poc/site-1/run_1/test.model.json` and `/tmp/nvflare/poc/site-2/run_1/test.model.json`, respectively.
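
As a quick sanity check, you can load a model file with the XGBoost Python package (assuming `xgboost` is installed in your environment):

```shell
# Load the saved model and print the number of boosting rounds.
python3 -c "import xgboost as xgb; bst = xgb.Booster(); bst.load_model('/tmp/nvflare/poc/site-1/run_1/test.model.json'); print(bst.num_boosted_rounds(), 'boosting rounds')"
```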

Finally, shut down everything from the admin CLI, using `admin` as the password:

```shell
shutdown client
shutdown server
```

## Training with GPUs

To run the demo with GPUs, make sure your machine has at least 2 GPUs. Build XGBoost with the federated learning plugin enabled along with CUDA (see the README).

Modify `../config/config_fed_client.json` and set `use_gpus` to `true`, then repeat the steps above.
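
As a rough sketch (illustrative only; the actual config file contains more keys and may nest this setting differently), the relevant fragment would look like:

```json
{
  "use_gpus": true
}
```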