File: model_persistence.rst

.. Places parent toc into the sidebar

:parenttoc: True

.. _model_persistence:

=================
Model persistence
=================

After training a scikit-learn model, it is desirable to have a way to persist
the model for future use without having to retrain. The following sections give
you some hints on how to persist a scikit-learn model.

Python-specific serialization
-----------------------------

It is possible to save a model in scikit-learn by using Python's built-in
persistence model, namely `pickle
<https://docs.python.org/3/library/pickle.html>`_::

  >>> from sklearn import svm
  >>> from sklearn import datasets
  >>> clf = svm.SVC()
  >>> X, y = datasets.load_iris(return_X_y=True)
  >>> clf.fit(X, y)
  SVC()

  >>> import pickle
  >>> s = pickle.dumps(clf)
  >>> clf2 = pickle.loads(s)
  >>> clf2.predict(X[0:1])
  array([0])
  >>> y[0]
  0

In the specific case of scikit-learn, it may be better to use joblib's
replacement for pickle (``dump`` & ``load``), which is more efficient on
objects that carry large numpy arrays internally, as is often the case for
fitted scikit-learn estimators. However, joblib can only pickle to disk, not
to a string::

  >>> from joblib import dump, load
  >>> dump(clf, 'filename.joblib') # doctest: +SKIP

Later you can load back the pickled model (possibly in another Python process)
with::

  >>> clf = load('filename.joblib') # doctest: +SKIP

.. note::

   The ``dump`` and ``load`` functions also accept file-like objects
   instead of filenames. More information on data persistence with Joblib is
   available `here
   <https://joblib.readthedocs.io/en/latest/persistence.html>`_.
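
   For instance, a model can be serialized to an in-memory buffer; a minimal
   sketch using the standard library's :class:`io.BytesIO` (``clf`` is the
   classifier fitted above)::

     >>> from io import BytesIO
     >>> buffer = BytesIO()
     >>> dump(clf, buffer)  # doctest: +SKIP
     >>> _ = buffer.seek(0)  # doctest: +SKIP
     >>> clf2 = load(buffer)  # doctest: +SKIP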

|details-start|
**InconsistentVersionWarning**
|details-split|

When an estimator is unpickled with a scikit-learn version that is inconsistent
with the version the estimator was pickled with, a
:class:`~sklearn.exceptions.InconsistentVersionWarning` is raised. This warning
can be caught to obtain the original version the estimator was pickled with::

  import pickle
  import warnings

  from sklearn.exceptions import InconsistentVersionWarning

  warnings.simplefilter("error", InconsistentVersionWarning)

  try:
      with open("model_from_previous_version.pickle", "rb") as f:
          est = pickle.load(f)
  except InconsistentVersionWarning as w:
      print(w.original_sklearn_version)

|details-end|

.. _persistence_limitations:

Security & maintainability limitations
......................................

pickle (and joblib by extension) has some issues regarding maintainability
and security. Because of this:

* Never unpickle untrusted data as it could lead to malicious code being
  executed upon loading.
* While models saved using one version of scikit-learn might load in
  other versions, this is entirely unsupported and inadvisable. It should
  also be kept in mind that operations performed on such data could give
  different and unexpected results.

In order to rebuild a similar model with future versions of scikit-learn,
additional metadata should be saved along with the pickled model:

* The training data, e.g. a reference to an immutable snapshot
* The Python source code used to generate the model
* The versions of scikit-learn and its dependencies
* The cross-validation score obtained on the training data

This should make it possible to check that the cross-validation score is in the
same range as before.
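
As an illustration, here is a minimal sketch of bundling such metadata with
the pickled model; the filenames and the dictionary layout are arbitrary
choices for this example, not a scikit-learn convention::

  import pickle

  import sklearn
  from sklearn.model_selection import cross_val_score

  metadata = {
      # version of scikit-learn used to train the model
      "sklearn_version": sklearn.__version__,
      # hypothetical reference to an immutable snapshot of the training data
      "training_data": "data/iris-v1.csv",
      # cross-validation score obtained on the training data
      "cv_score": cross_val_score(clf, X, y).mean(),
  }
  with open("model_with_metadata.pickle", "wb") as f:
      pickle.dump({"model": clf, "metadata": metadata}, f)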

Aside from a few exceptions, pickled models should be portable across
architectures assuming the same versions of dependencies and Python are used.
If you encounter an estimator that is not portable, please open an issue on
GitHub. Pickled models are often deployed in production using containers such
as Docker, in order to freeze the environment and dependencies.

If you want to know more about these issues and explore other possible
serialization methods, please refer to this
`talk by Alex Gaynor
<https://pyvideo.org/video/2566/pickles-are-for-delis-not-software>`_.


A more secure format: `skops`
.............................

`skops <https://skops.readthedocs.io/en/stable/>`__ provides a more secure
format via the :mod:`skops.io` module. It avoids using :mod:`pickle` and only
loads files whose types and references to functions are trusted either by
default or by the user.

|details-start|
**Using skops**
|details-split|

The API is very similar to that of ``pickle``, and
you can persist your models as explained in the `docs
<https://skops.readthedocs.io/en/stable/persistence.html>`__ using
:func:`skops.io.dump` and :func:`skops.io.dumps`::

    import skops.io as sio
    obj = sio.dumps(clf)

And you can load them back using :func:`skops.io.load` and
:func:`skops.io.loads`. However, you need to specify the types which you
trust. You can get the unknown types in a dumped object / file using
:func:`skops.io.get_untrusted_types`, and after reviewing them, pass them to
the load function::

    unknown_types = sio.get_untrusted_types(data=obj)
    clf = sio.loads(obj, trusted=unknown_types)

If you trust the source of the file / object, you can pass ``trusted=True``::

    clf = sio.loads(obj, trusted=True)
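
The file-based counterparts work the same way; a minimal sketch, reusing the
fitted ``clf`` from above (the filename is an arbitrary choice)::

    sio.dump(clf, "model.skops")
    unknown_types = sio.get_untrusted_types(file="model.skops")
    clf = sio.load("model.skops", trusted=unknown_types)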

Please report issues and feature requests related to this format on the `skops
issue tracker <https://github.com/skops-dev/skops/issues>`__.

|details-end|

Interoperable formats
---------------------

For reproducibility and quality control needs, when different architectures
and environments should be taken into account, exporting the model in the
`Open Neural Network Exchange <https://onnx.ai/>`_ (ONNX) format or the
`Predictive Model Markup Language (PMML)
<https://dmg.org/pmml/v4-4-1/GeneralStructure.html>`_ format
might be a better approach than using ``pickle`` alone. These formats are
helpful when you want to use your model for prediction in a different
environment from the one in which it was trained.

ONNX is a binary serialization of the model. It has been developed to improve
the usability of the interoperable representation of data models.
It aims to facilitate the conversion of data
models between different machine learning frameworks, and to improve their
portability across different computing architectures. More details are
available from the `ONNX tutorial <https://onnx.ai/get-started.html>`_.
To convert a scikit-learn model to ONNX, a specific tool, `sklearn-onnx
<http://onnx.ai/sklearn-onnx/>`_, has been developed.
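
As a minimal sketch, assuming the ``skl2onnx`` package is installed and
reusing the fitted ``clf`` and data ``X`` from above (the input name ``"X"``
is the default chosen by ``to_onnx`` when converting from an array)::

  import numpy
  from skl2onnx import to_onnx

  # The sample passed to to_onnx fixes the input type and shape of the graph.
  onx = to_onnx(clf, X[:1].astype(numpy.float32))
  with open("model.onnx", "wb") as f:
      f.write(onx.SerializeToString())

  # The exported model can then be run without scikit-learn, e.g. with
  # onnxruntime:
  import onnxruntime as rt

  sess = rt.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
  pred = sess.run(None, {"X": X.astype(numpy.float32)})[0]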

PMML is an implementation of the `XML
<https://en.wikipedia.org/wiki/XML>`_ document standard
defined to represent data models together with the data used to generate them.
Being human and machine readable,
PMML is a good option for model validation on different platforms and for
long-term archiving. On the other hand, as with XML in general, its verbosity
does not help in production when performance is critical.
To convert a scikit-learn model to PMML you can use, for example, `sklearn2pmml
<https://github.com/jpmml/sklearn2pmml>`_, distributed under the Affero GPLv3
license.
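
As a minimal sketch, assuming the ``sklearn2pmml`` package is installed (the
conversion is performed by the JPMML library and requires a Java runtime) and
reusing the ``clf``, ``X``, ``y`` from above::

  from sklearn2pmml import sklearn2pmml
  from sklearn2pmml.pipeline import PMMLPipeline

  # sklearn2pmml exports PMMLPipeline objects rather than bare estimators.
  pipeline = PMMLPipeline([("classifier", clf)])
  pipeline.fit(X, y)
  sklearn2pmml(pipeline, "model.pmml")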