# Datasets
## CIFAR10 small image classification
Dataset of 50,000 32x32 color training images, labeled over 10 categories, and 10,000 test images.
### Usage:
```python
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
```
- __Returns:__
- 2 tuples:
- __x_train, x_test__: uint8 array of RGB image data with shape (num_samples, 3, 32, 32) or (num_samples, 32, 32, 3) based on the `image_data_format` backend setting of either `channels_first` or `channels_last` respectively.
- __y_train, y_test__: uint8 array of category labels (integers in range 0-9) with shape (num_samples, 1).
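The label arrays have shape `(num_samples, 1)`; for training with a categorical cross-entropy loss they are usually converted to one-hot vectors. A minimal NumPy sketch of that conversion (equivalent in spirit to `keras.utils.to_categorical`):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Convert an integer label array of shape (n, 1) or (n,) to (n, num_classes)."""
    labels = np.asarray(labels).ravel()
    one_hot = np.zeros((labels.shape[0], num_classes), dtype=np.float32)
    one_hot[np.arange(labels.shape[0]), labels] = 1.0
    return one_hot

# A toy slice standing in for y_train (same shape and dtype as load_data() returns).
y = np.array([[3], [0], [9]], dtype=np.uint8)
y_cat = to_one_hot(y, num_classes=10)  # shape (3, 10)
```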
---
## CIFAR100 small image classification
Dataset of 50,000 32x32 color training images, labeled over 100 categories, and 10,000 test images.
### Usage:
```python
from keras.datasets import cifar100
(x_train, y_train), (x_test, y_test) = cifar100.load_data(label_mode='fine')
```
- __Returns:__
- 2 tuples:
- __x_train, x_test__: uint8 array of RGB image data with shape (num_samples, 3, 32, 32) or (num_samples, 32, 32, 3) based on the `image_data_format` backend setting of either `channels_first` or `channels_last` respectively.
- __y_train, y_test__: uint8 array of category labels with shape (num_samples, 1).
- __Arguments:__
    - __label_mode__: "fine" (100 fine-grained classes) or "coarse" (20 superclasses).
---
## IMDB Movie reviews sentiment classification
Dataset of 25,000 movie reviews from IMDB, labeled by sentiment (positive/negative). Reviews have been preprocessed, and each review is encoded as a [sequence](preprocessing/sequence.md) of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer "3" encodes the 3rd most frequent word in the data. This allows for quick filtering operations such as: "only consider the top 10,000 most common words, but eliminate the top 20 most common words".
As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word.
### Usage:
```python
from keras.datasets import imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data(path="imdb.npz",
num_words=None,
skip_top=0,
maxlen=None,
seed=113,
start_char=1,
oov_char=2,
index_from=3)
```
- __Returns:__
- 2 tuples:
    - __x_train, x_test__: list of sequences, which are lists of indexes (integers). If the num_words argument was specified, the maximum possible index value is num_words-1. If the maxlen argument was specified, the largest possible sequence length is maxlen.
- __y_train, y_test__: list of integer labels (1 or 0).
- __Arguments:__
- __path__: if you do not have the data locally (at `'~/.keras/datasets/' + path`), it will be downloaded to this location.
- __num_words__: integer or None. Top most frequent words to consider. Any less frequent word will appear as `oov_char` value in the sequence data.
- __skip_top__: integer. Top most frequent words to ignore (they will appear as `oov_char` value in the sequence data).
- __maxlen__: int. Maximum sequence length. Any longer sequence will be truncated.
- __seed__: int. Seed for reproducible data shuffling.
- __start_char__: int. The start of a sequence will be marked with this character.
Set to 1 because 0 is usually the padding character.
- __oov_char__: int. Words that were cut out because of the `num_words`
    or `skip_top` limit will be replaced with this character.
- __index_from__: int. Index actual words with this index and higher.
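The index convention above means a sequence entry `i` maps back to the word of rank `i - index_from`, while indices below `index_from` are reserved markers. A sketch of decoding a sequence with a toy vocabulary (the real mapping comes from `imdb.get_word_index()`; the words and ranks here are made up for illustration):

```python
# Hypothetical word -> frequency-rank mapping; imdb.get_word_index() returns the real one.
word_index = {"the": 1, "movie": 2, "was": 3, "great": 4}
index_from, start_char, oov_char = 3, 1, 2  # the load_data() defaults

# Invert the mapping, shifting ranks by index_from as load_data() does.
inverted = {rank + index_from: word for word, rank in word_index.items()}
inverted[start_char] = "<start>"
inverted[oov_char] = "<oov>"

sequence = [1, 4, 5, 6, 2, 7]  # a toy encoded review
decoded = " ".join(inverted.get(i, "<oov>") for i in sequence)
# decoded == "<start> the movie was <oov> great"
```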
---
## Reuters newswire topics classification
Dataset of 11,228 newswires from Reuters, labeled over 46 topics. As with the IMDB dataset, each wire is encoded as a sequence of word indexes (same conventions).
### Usage:
```python
from keras.datasets import reuters
(x_train, y_train), (x_test, y_test) = reuters.load_data(path="reuters.npz",
num_words=None,
skip_top=0,
maxlen=None,
test_split=0.2,
seed=113,
start_char=1,
oov_char=2,
index_from=3)
```
The specifications are the same as those of the IMDB dataset, with the addition of:
- __test_split__: float. Fraction of the dataset to be used as test data.
This dataset also makes available the word index used for encoding the sequences:
```python
word_index = reuters.get_word_index(path="reuters_word_index.json")
```
- __Returns:__ A dictionary where keys are words (str) and values are indexes (int). E.g. `word_index["giraffe"]` might return `1234`.
- __Arguments:__
- __path__: if you do not have the index file locally (at `'~/.keras/datasets/' + path`), it will be downloaded to this location.
---
## MNIST database of handwritten digits
Dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images.
### Usage:
```python
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```
- __Returns:__
- 2 tuples:
- __x_train, x_test__: uint8 array of grayscale image data with shape (num_samples, 28, 28).
- __y_train, y_test__: uint8 array of digit labels (integers in range 0-9) with shape (num_samples,).
- __Arguments:__
    - __path__: if you do not have the data file locally (at `'~/.keras/datasets/' + path`), it will be downloaded to this location.
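The images come back as uint8 arrays with pixel values in [0, 255]. A common preprocessing sketch scales them to [0, 1] floats and flattens each image for a dense model (random data stands in for `x_train` here to keep the example self-contained):

```python
import numpy as np

# Random uint8 images with the same shape and dtype as mnist.load_data() returns.
x = np.random.randint(0, 256, size=(5, 28, 28), dtype=np.uint8)

x_float = x.astype("float32") / 255.0            # scale pixel values to [0, 1]
x_flat = x_float.reshape(x_float.shape[0], -1)   # flatten to (num_samples, 784)
```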
---
## Fashion-MNIST database of fashion articles
Dataset of 60,000 28x28 grayscale images of 10 fashion categories, along with a test set of 10,000 images. This dataset can be used as a drop-in replacement for MNIST. The class labels are:
| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
### Usage:
```python
from keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
```
- __Returns:__
- 2 tuples:
- __x_train, x_test__: uint8 array of grayscale image data with shape (num_samples, 28, 28).
- __y_train, y_test__: uint8 array of labels (integers in range 0-9) with shape (num_samples,).
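Since `load_data()` only returns integer labels, mapping them to human-readable names is done with a list ordered as in the table above:

```python
# Class names in label order, taken from the table above.
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

labels = [9, 2, 0]  # e.g. a slice of y_train
names = [class_names[i] for i in labels]
# names == ["Ankle boot", "Pullover", "T-shirt/top"]
```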
---
## Boston housing price regression dataset
Dataset taken from the StatLib library which is maintained at Carnegie Mellon University.
Samples contain 13 attributes of houses at different locations around the Boston suburbs in the late 1970s.
Targets are the median values of the houses at a location, in thousands of dollars (k$).
### Usage:
```python
from keras.datasets import boston_housing
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
```
- __Arguments:__
- __path__: path where to cache the dataset locally
(relative to ~/.keras/datasets).
- __seed__: Random seed for shuffling the data
before computing the test split.
- __test_split__: fraction of the data to reserve as test set.
- __Returns:__
Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
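The 13 attributes span very different scales, so features are commonly standardized using statistics computed on the training set only. A sketch of that step, with random data standing in for the arrays `load_data()` returns:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random data with the same shapes as the Boston housing train/test features.
x_train = rng.normal(loc=10.0, scale=5.0, size=(404, 13))
x_test = rng.normal(loc=10.0, scale=5.0, size=(102, 13))

# Standardize with training-set statistics; applying them to x_test as well
# avoids leaking test-set information into preprocessing.
mean = x_train.mean(axis=0)
std = x_train.std(axis=0)
x_train_std = (x_train - mean) / std
x_test_std = (x_test - mean) / std
```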