Overview

I implemented the MNIST sample in Keras and got it running.


Background and Purpose

This is my first time trying Keras. As a first step, I implement a neural network that classifies the MNIST handwritten digits.


Details

0. Environment

  • Jupyter
  • Python 3
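
The exact versions are not recorded here, so as a quick check the snippet below (my assumption: Keras 2.x on the TensorFlow backend) prints what is actually installed.

import sys
import keras

print(sys.version)              # Python version
print(keras.__version__)        # Keras version (assumed 2.x)
print(keras.backend.backend())  # backend in use (assumed 'tensorflow')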

1. Imports

import os
import keras
from keras.models import Sequential
from keras.layers import Dense

2. Network

Define a fully connected NN with a 784-neuron input layer, hidden layers of 128 and 64 neurons, and a 10-neuron output layer.

model = Sequential()

model.add(Dense(units=128, activation='relu', input_dim=784))  # input: 784 pixels -> 128 units
model.add(Dense(units=64, activation='relu'))                  # hidden layer: 64 units
model.add(Dense(units=10, activation='softmax'))               # output layer: 10 classes

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True),
              metrics=['accuracy'])
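
To double-check the architecture, model.summary() (standard Keras, not part of the original listing) prints the layer output shapes and parameter counts.

model.summary()  # expected total: 784*128+128 + 128*64+64 + 64*10+10 = 109,386 parameters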

3. Getting the data

from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
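
As a quick sanity check (my addition), the loaded arrays can be inspected; MNIST comes as 60,000 training and 10,000 test images of 28x28 uint8 pixels with integer labels 0-9.

print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)
print(x_train.dtype)                 # uint8, pixel values in [0, 255]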

4. Training

batch_size = 100
epochs = 10                           # last epoch to train to
initial_epoch = 0                     # epoch to start from (for resuming)
x_train = x_train.reshape(60000, 784) # flatten each 28x28 image into a 784-dim vector
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')   # convert from int to float32
x_test = x_test.astype('float32')
x_train /= 255                        # rescale [0, 255] to [0.0, 1.0]
x_test /= 255
y_train = keras.utils.to_categorical(y_train, 10)  # one-hot encode the labels
y_test = keras.utils.to_categorical(y_test, 10)
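
For reference, to_categorical turns integer labels into one-hot vectors; a small illustration of what the conversion above does (my addition).

print(keras.utils.to_categorical([3], 10))  # one row with a 1 at index 3, zeros elsewhere
print(x_train.shape, y_train.shape)         # now (60000, 784) and (60000, 10)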

from keras.callbacks import ModelCheckpoint, TensorBoard
fpath = 'weights.{epoch:04d}.hdf5'
cb = ModelCheckpoint(
    fpath,                # destination file path
    verbose=1,            # verbose console output
    save_best_only=True)  # save only when val_loss (the default monitored metric) improves
tsb = TensorBoard(log_dir='./logs')  # TensorBoard logging callback; './logs' is my assumed log directory

weight_hdf5 = "weights." + str(initial_epoch).zfill(4) + '.hdf5'
if os.path.exists(weight_hdf5):
    model.load_weights(weight_hdf5)
    print(weight_hdf5 + " loaded.")
else:
    print(weight_hdf5 + " not found.")

model.fit(x_train, y_train,
          epochs=epochs,
          initial_epoch=initial_epoch,
          batch_size=batch_size,
          validation_data=(x_test, y_test),
          callbacks=[cb, tsb])

classes = model.predict(x_test, batch_size=batch_size)

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
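
model.predict returns a (10000, 10) array of class probabilities, so the predicted digits can be recovered with argmax; a minimal sketch (the variable names are my own).

import numpy as np

predicted_labels = np.argmax(classes, axis=1)  # most probable digit per test image
true_labels = np.argmax(y_test, axis=1)        # undo the one-hot encoding
print('manual accuracy:', np.mean(predicted_labels == true_labels))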

5. Results

I ran just 10 epochs. It looks like the network is learning properly.

weights.0000.hdf5 not found.
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 2s 35us/step - loss: 0.4187 - acc: 0.8800 - val_loss: 0.2162 - val_acc: 0.9380

Epoch 00001: val_loss improved from inf to 0.21623, saving model to weights.0001.hdf5
Epoch 2/10
60000/60000 [==============================] - 2s 35us/step - loss: 0.1931 - acc: 0.9440 - val_loss: 0.1542 - val_acc: 0.9546

Epoch 00002: val_loss improved from 0.21623 to 0.15421, saving model to weights.0002.hdf5
Epoch 3/10
60000/60000 [==============================] - 2s 33us/step - loss: 0.1426 - acc: 0.9578 - val_loss: 0.1216 - val_acc: 0.9628

Epoch 00003: val_loss improved from 0.15421 to 0.12159, saving model to weights.0003.hdf5
Epoch 4/10
60000/60000 [==============================] - 2s 26us/step - loss: 0.1128 - acc: 0.9667 - val_loss: 0.1126 - val_acc: 0.9657

Epoch 00004: val_loss improved from 0.12159 to 0.11258, saving model to weights.0004.hdf5
Epoch 5/10
60000/60000 [==============================] - 2s 30us/step - loss: 0.0938 - acc: 0.9725 - val_loss: 0.0992 - val_acc: 0.9697

Epoch 00005: val_loss improved from 0.11258 to 0.09921, saving model to weights.0005.hdf5
Epoch 6/10
60000/60000 [==============================] - 2s 34us/step - loss: 0.0794 - acc: 0.9771 - val_loss: 0.0882 - val_acc: 0.9737

Epoch 00006: val_loss improved from 0.09921 to 0.08821, saving model to weights.0006.hdf5
Epoch 7/10
60000/60000 [==============================] - 2s 33us/step - loss: 0.0684 - acc: 0.9799 - val_loss: 0.0880 - val_acc: 0.9726

Epoch 00007: val_loss improved from 0.08821 to 0.08802, saving model to weights.0007.hdf5
Epoch 8/10
60000/60000 [==============================] - 2s 31us/step - loss: 0.0595 - acc: 0.9827 - val_loss: 0.0803 - val_acc: 0.9741

Epoch 00008: val_loss improved from 0.08802 to 0.08034, saving model to weights.0008.hdf5
Epoch 9/10
60000/60000 [==============================] - 2s 33us/step - loss: 0.0530 - acc: 0.9844 - val_loss: 0.0779 - val_acc: 0.9751

Epoch 00009: val_loss improved from 0.08034 to 0.07787, saving model to weights.0009.hdf5
Epoch 10/10
60000/60000 [==============================] - 2s 30us/step - loss: 0.0461 - acc: 0.9867 - val_loss: 0.0826 - val_acc: 0.9749

Epoch 00010: val_loss did not improve from 0.07787
Test loss: 0.08257695595924743
Test accuracy: 0.9749
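
Since the checkpoint with the best val_loss above is weights.0009.hdf5, it can be loaded back and re-evaluated later; a minimal sketch, assuming that file is still in the working directory.

model.load_weights('weights.0009.hdf5')  # checkpoint with the best val_loss
score = model.evaluate(x_test, y_test, verbose=0)
print('Best-checkpoint test loss:', score[0])
print('Best-checkpoint test accuracy:', score[1])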

Summary

I implemented the MNIST sample in Keras and got it running.