File: sklearn-1.2.1.patch

package: q2-feature-classifier 2022.11.1-2
  • area: main
  • in suites: bookworm
  • size: 1,460 kB
  • sloc: python: 3,564; makefile: 33; sh: 13
file content: 80 lines | 3,646 bytes
Description: fix test failures with sklearn 1.2.1
 This patch works around scikit-learn 1.2.1 no longer tolerating ngram_range
 being passed as a list instead of a tuple, combined with the json module
 converting tuples from pipeline specifications into lists. The default and
 test specifications now use tuples, and ngram_range is coerced back to a
 tuple when a pipeline is rebuilt from a deserialised spec, as illustrated by
 the sketch after this header.
Author: Étienne Mollier <emollier@debian.org>
Forwarded: https://github.com/qiime2/q2-feature-classifier/issues/187
Last-Update: 2023-02-02
---
This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
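
For illustration only (not part of the patch proper): a minimal sketch of the
failure mode and of the fix, assuming scikit-learn 1.2.x with its parameter
validation; the toy DNA string input is made up for the example and the quoted
error text is abbreviated.

    import json
    from sklearn.feature_extraction.text import HashingVectorizer

    # json round-trips tuples as lists, so a spec written with (7, 7)
    # comes back carrying [7, 7] after serialisation and deserialisation.
    spec = json.loads(json.dumps({'ngram_range': (7, 7)}))
    print(spec['ngram_range'])   # [7, 7]

    # scikit-learn 1.2.1 rejects the list form during parameter validation,
    # e.g. HashingVectorizer(ngram_range=[7, 7]).fit(['ACGTACGT']) fails with
    # "The 'ngram_range' parameter ... must be an instance of 'tuple'".
    # Coercing back to a tuple, as this patch does, keeps the spec usable:
    hv = HashingVectorizer(analyzer='char_wb',
                           ngram_range=tuple(spec['ngram_range']))
    hv.fit(['ACGTACGT'])
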
--- q2-feature-classifier.orig/q2_feature_classifier/_skl.py
+++ q2-feature-classifier/q2_feature_classifier/_skl.py
@@ -17,7 +17,7 @@
            {'__type__': 'feature_extraction.text.HashingVectorizer',
             'analyzer': 'char_wb',
             'n_features': 8192,
-            'ngram_range': [7, 7],
+            'ngram_range': (7, 7),
             'alternate_sign': False}],
           ['classify',
            {'__type__': 'custom.LowMemoryMultinomialNB',
--- q2-feature-classifier.orig/q2_feature_classifier/tests/test_classifier.py
+++ q2-feature-classifier/q2_feature_classifier/tests/test_classifier.py
@@ -66,7 +66,7 @@
                      {'__type__': 'feature_extraction.text.HashingVectorizer',
                       'analyzer': 'char_wb',
                       'n_features': 8192,
-                      'ngram_range': [8, 8],
+                      'ngram_range': (8, 8),
                       'alternate_sign': False}],
                     ['classify',
                      {'__type__': 'naive_bayes.GaussianNB'}]]
@@ -117,7 +117,7 @@
                      {'__type__': 'feature_extraction.text.HashingVectorizer',
                       'analyzer': 'char_wb',
                       'n_features': 8192,
-                      'ngram_range': [8, 8],
+                      'ngram_range': (8, 8),
                       'alternate_sign': False}],
                     ['classify',
                      {'__type__': 'linear_model.LogisticRegression'}]]
--- q2-feature-classifier.orig/q2_feature_classifier/tests/test_custom.py
+++ q2-feature-classifier/q2_feature_classifier/tests/test_custom.py
@@ -39,7 +39,7 @@
                 {'__type__': 'feature_extraction.text.HashingVectorizer',
                  'analyzer': 'char',
                  'n_features': 8192,
-                 'ngram_range': [8, 8],
+                 'ngram_range': (8, 8),
                  'alternate_sign': False}],
                 ['classify',
                  {'__type__': 'custom.LowMemoryMultinomialNB',
@@ -68,7 +68,7 @@
 
         params = {'analyzer': 'char',
                   'n_features': 8192,
-                  'ngram_range': [8, 8],
+                  'ngram_range': (8, 8),
                   'alternate_sign': False}
         hv = HashingVectorizer(**params)
         unchunked = hv.fit_transform(X)
--- q2-feature-classifier.orig/q2_feature_classifier/classifier.py
+++ q2-feature-classifier/q2_feature_classifier/classifier.py
@@ -84,6 +84,8 @@
 
 def pipeline_from_spec(spec):
     def as_steps(obj):
+        if 'ngram_range' in obj:
+            obj['ngram_range'] = tuple(obj['ngram_range'])
         if '__type__' in obj:
             klass = _load_class(obj['__type__'])
             return klass(**{k: v for k, v in obj.items() if k != '__type__'})
@@ -323,6 +325,8 @@
                 kwargs[param] = json.loads(kwargs[param])
             except (json.JSONDecodeError, TypeError):
                 pass
+            if param == 'feat_ext__ngram_range':
+                kwargs[param] = tuple(kwargs[param])
         pipeline = pipeline_from_spec(spec)
         pipeline.set_params(**kwargs)
         if class_weight is not None: