File: saving_model.rst

########################
Introduction to Model IO
########################

Since 2.1.0, the default model format for XGBoost is UBJSON. This format is used when
serializing models to file, serializing models to a buffer, and for memory snapshots
(pickling and the like).

In XGBoost 1.0.0, we introduced support for using `JSON
<https://www.json.org/json-en.html>`_ to save/load XGBoost models and the related
hyper-parameters for training, aiming to replace the old binary internal format with an
open format that can be easily reused.  Later, in XGBoost 1.6.0, additional support for
`Universal Binary JSON <https://ubjson.org/>`__ was added as an optimization for more
efficient model IO; it became the default in 2.1.0.

JSON and UBJSON share the same document structure with different representations, and we
will refer to them collectively as the JSON format. This tutorial aims to share some basic
insights into the JSON serialisation method used in XGBoost.  Unless explicitly stated
otherwise, the following sections assume you are using one of the two output formats,
which can be enabled by providing a file name with ``.json`` (or ``.ubj`` for binary
JSON) as the file extension when saving/loading the model:
``booster.save_model('model.json')``.  More details below.

Before we get started, note that XGBoost is a gradient boosting library with a focus on
tree models, which means that inside XGBoost there are two distinct parts:

1. The model, consisting of trees, and
2. The hyperparameters and configurations used for building the model.

If you come from the Deep Learning community, it should be clear to you that there is a
difference between the neural network structure, composed of weights with fixed tensor
operations, and the optimizer (like RMSprop) used to train it.

So when one calls ``booster.save_model`` (``xgb.save`` in R), XGBoost saves the trees,
some model parameters like the number of input columns in the trained trees, and the
objective function, which combine to represent the concept of a "model" in XGBoost.  As
for why we save the objective as part of the model: the objective controls the
transformation of the global bias (called ``base_score`` in XGBoost) and task-specific
information.  Users can share this model with others for prediction or evaluation, or to
continue the training with a different set of hyper-parameters, etc.
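
As a minimal Python sketch of this workflow (the synthetic data and file name here are
illustrative only):

.. code-block:: python

  import numpy as np
  import xgboost as xgb

  # Train a small model, then save the trees, model parameters, and objective.
  dtrain = xgb.DMatrix(np.random.rand(100, 4), label=np.random.rand(100))
  bst = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=10)
  bst.save_model("shared_model.json")

  # Anyone can load the file for prediction, evaluation, or continued training.
  bst2 = xgb.Booster()
  bst2.load_model("shared_model.json")
  preds = bst2.predict(dtrain)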

However, this is not the end of the story.  There are cases where we need to save
something more than just the model itself.  For example, in distributed training, XGBoost
performs checkpointing operations.  Or, for some reason, your favorite distributed
computing framework decides to copy the model from one worker to another and continue the
training there.  In such cases, the serialisation output is required to contain enough
information to continue the previous training without the user providing any parameters
again.  We consider such a scenario a **memory snapshot** (or memory based serialisation
method) and distinguish it from the normal model IO operation. Currently, memory
snapshots are used in the following places:

* Python package: when the ``Booster`` object is pickled with the built-in ``pickle`` module.
* R package: when the ``xgb.Booster`` object is persisted with the built-in functions ``saveRDS``
  or ``save``.
* JVM packages: when the ``Booster`` object is serialized with the built-in function ``saveModel``.

Support in other language bindings is still a work in progress.
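
For instance, in Python the memory snapshot path is triggered simply by pickling a
trained ``Booster``; a minimal sketch with synthetic data and an illustrative file name:

.. code-block:: python

  import pickle

  import numpy as np
  import xgboost as xgb

  dtrain = xgb.DMatrix(np.random.rand(100, 4), label=np.random.rand(100))
  bst = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=2)

  # Pickling captures the internal configuration in addition to the trees and
  # objective, so training can resume without re-specifying the parameters.
  with open("snapshot.pkl", "wb") as fd:
      pickle.dump(bst, fd)

  with open("snapshot.pkl", "rb") as fd:
      bst = pickle.load(fd)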

.. note::

  The old binary format doesn't distinguish between the model and the raw memory
  serialisation format; it's a mix of everything, which is part of the reason why we want
  to replace it with a more robust serialisation method.  The JVM packages have their own
  memory based serialisation methods.

To enable JSON format support for model IO (saving only the trees and objective), provide
a file name with ``.json`` or ``.ubj`` as the file extension; the latter is the extension
for `Universal Binary JSON <https://ubjson.org/>`__.

.. code-block:: python
  :caption: Python

  bst.save_model('model_file_name.json')

.. code-block:: r
  :caption: R

  xgb.save(bst, 'model_file_name.json')

.. code-block:: scala
  :caption: Scala

  val format = "json"  // or val format = "ubj"
  model.write.option("format", format).save("model_directory_path")

.. note::

  Only load models from JSON files that were produced by XGBoost. Attempting to load
  JSON files that were produced by an external source may lead to undefined behaviors
  and crashes.

As for memory snapshots, UBJSON has been the default since XGBoost 1.6. When loading a
model back, XGBoost recognizes the file extensions ``.json`` and ``.ubj``, and dispatches
accordingly. If the extension is not specified, XGBoost tries to guess the right one.
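
For example, loading in Python mirrors saving, with the extension selecting the format
(file names follow the earlier snippets):

.. code-block:: python

  import xgboost as xgb

  bst = xgb.Booster()
  bst.load_model("model_file_name.ubj")   # Universal Binary JSON
  bst.load_model("model_file_name.json")  # JSON text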

***************************************************************
A note on backward compatibility of models and memory snapshots
***************************************************************

**We guarantee backward compatibility for models but not for memory snapshots.**

Models (trees and objective) use a stable representation, so that models produced by
earlier versions of XGBoost are accessible in later versions. **If you'd like to store or
archive your model for the long term, use** ``save_model`` (Python) or ``xgb.save`` (R).

On the other hand, a memory snapshot (serialisation) captures many things internal to
XGBoost, and its format is not stable and is subject to frequent changes. Therefore, a
memory snapshot is suitable for checkpointing only, where you persist the complete
snapshot of the training configuration so that you can recover robustly from possible
failures and resume the training process. Loading a memory snapshot generated by an
earlier version of XGBoost may result in errors or undefined behaviors.
**If a model is persisted with** ``pickle.dump`` (Python) or ``saveRDS`` (R), **then the
model may not be accessible in later versions of XGBoost.**

.. _custom-obj-metric:

***************************
Custom objective and metric
***************************

XGBoost accepts user-provided objective and metric functions as an extension.  These
functions are not saved in the model file, as they are language-dependent features.  In
Python, users can pickle the model to include these functions in the saved binary.  One
drawback is that the output of pickle is not a stable serialization format and doesn't
work across different Python versions or XGBoost versions, not to mention different
language environments.  Another way to work around this limitation is to provide these
functions again after the model is loaded. If the customized function is useful, please
consider making a PR to implement it inside XGBoost; this way we can have your functions
working with different language bindings.
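
A minimal Python sketch of the second approach, using a hand-written squared error
objective (the data and file name here are illustrative):

.. code-block:: python

  import numpy as np
  import xgboost as xgb

  def squared_error_obj(predt, dtrain):
      # Gradient and hessian of the squared error loss.
      y = dtrain.get_label()
      return predt - y, np.ones_like(predt)

  dtrain = xgb.DMatrix(np.random.rand(100, 4), label=np.random.rand(100))
  bst = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=4,
                  obj=squared_error_obj)
  bst.save_model("model.json")  # the Python function itself is not saved

  # After loading, provide the objective again to continue training.
  bst = xgb.Booster()
  bst.load_model("model.json")
  bst = xgb.train({"tree_method": "hist"}, dtrain, num_boost_round=4,
                  obj=squared_error_obj, xgb_model=bst)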

******************************************************
Loading pickled file from different version of XGBoost
******************************************************

As noted, a pickled model is neither portable nor stable, but in some cases pickled
models are valuable.  One way to restore them in the future is to load them back with the
specific version of Python and XGBoost that produced them, then export the model by
calling ``save_model``.
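
A minimal Python sketch, to be run with the same Python and XGBoost versions that created
the pickle (the file names are hypothetical):

.. code-block:: python

  import pickle

  with open("old_model.pkl", "rb") as fd:
      bst = pickle.load(fd)

  # Re-export using the stable model format; the resulting file can then be
  # loaded by later versions of XGBoost.
  bst.save_model("model.json")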

A similar procedure may be used to recover the model persisted in an old RDS file. In R,
you are able to install an older version of XGBoost using the ``remotes`` package:

.. code-block:: r

  library(remotes)
  remotes::install_version("xgboost", "0.90.0.1")  # Install version 0.90.0.1

Once the desired version is installed, you can load the RDS file with ``readRDS`` and recover the
``xgb.Booster`` object. Then call ``xgb.save`` to export the model using the stable representation.
Now you should be able to use the model in the latest version of XGBoost.

********************************************************
Saving and Loading the internal parameters configuration
********************************************************

XGBoost's ``C API``, ``Python API`` and ``R API`` support saving and loading the internal
configuration directly as a JSON string.  In the Python package:

.. code-block:: python

  bst = xgboost.train(...)
  config = bst.save_config()
  print(config)


or in R:

.. code-block:: r

  config <- xgb.config(bst)
  print(config)

This will print out something similar to the following (not the actual output, as it is too long for demonstration):

.. code-block:: javascript

    {
      "Learner": {
        "generic_parameter": {
          "device": "cuda:0",
          "gpu_page_size": "0",
          "n_jobs": "0",
          "random_state": "0",
          "seed": "0",
          "seed_per_iteration": "0"
        },
        "gradient_booster": {
          "gbtree_train_param": {
            "num_parallel_tree": "1",
            "process_type": "default",
            "tree_method": "hist",
            "updater": "grow_gpu_hist",
            "updater_seq": "grow_gpu_hist"
          },
          "name": "gbtree",
          "updater": {
            "grow_gpu_hist": {
              "gpu_hist_train_param": {
                "debug_synchronize": "0",
              },
              "train_param": {
                "alpha": "0",
                "cache_opt": "1",
                "colsample_bylevel": "1",
                "colsample_bynode": "1",
                "colsample_bytree": "1",
                "default_direction": "learn",

                ...

                "subsample": "1"
              }
            }
          }
        },
        "learner_train_param": {
          "booster": "gbtree",
          "disable_default_eval_metric": "0",
          "objective": "reg:squarederror"
        },
        "metrics": [],
        "objective": {
          "name": "reg:squarederror",
          "reg_loss_param": {
            "scale_pos_weight": "1"
          }
        }
      },
      "version": [1, 0, 0]
    }


You can load it back into a model generated by the same version of XGBoost with:

.. code-block:: python

  bst.load_config(config)

This way, users can study the internal representation more closely.  Please note that
some JSON generators use locale-dependent floating point serialization methods, which
are not supported by XGBoost.

*************************************************
Difference between saving model and dumping model
*************************************************

XGBoost has a function called ``dump_model`` in the Booster object, which lets you export
the model in a readable format like ``text``, ``json`` or ``dot`` (graphviz).  Its primary
use case is model interpretation or visualization, and the output is not supposed to be
loaded back into XGBoost.  The JSON version has a `schema
<https://github.com/dmlc/xgboost/blob/master/doc/dump.schema>`__.  See the next section
for more info.
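
For example, given a trained ``Booster`` object ``bst`` as in the earlier snippets:

.. code-block:: python

  # Write a human-readable dump; this output is for inspection only and
  # cannot be loaded back into XGBoost.
  bst.dump_model("dump.json", dump_format="json")

  # Or obtain the per-tree dumps as a list of strings.
  dumped = bst.get_dump(dump_format="text")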

***********
JSON Schema
***********

Another important feature of the JSON format is a documented `schema
<https://json-schema.org/>`__, based on which one can easily reuse the model output by
XGBoost.  Here is the JSON schema for the output model (not for the memory snapshot,
which, as noted above, is not stable).  For an example of parsing an XGBoost tree model,
see ``/demo/json-model``.  Please note the ``weight_drop`` field used in the ``dart``
booster: XGBoost does not scale the tree leaves directly; instead, it saves the weights
as a separate array.
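
As a small illustration (a sketch only; the authoritative layout is the schema included
below, and the field path here assumes a ``gbtree`` booster), a saved JSON model can be
inspected with any JSON parser:

.. code-block:: python

  import json

  with open("model.json", "r") as fd:
      model = json.load(fd)

  # The trees of a gbtree booster live under the gradient booster's model.
  trees = model["learner"]["gradient_booster"]["model"]["trees"]
  print("number of trees:", len(trees))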

.. include:: ../model.schema
   :code: json