The CIFAR-10 dataset consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.
The dataset is divided into five training batches and one test batch, each containing 10,000 images. The test batch contains exactly 1,000 randomly selected images from each class. The training batches contain the remaining images in random order, so an individual training batch may hold more images from one class than another; between them, however, the training batches contain exactly 5,000 images from each class.
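The integer labels returned by `cifar10.load_data()` follow a fixed class order; keeping that list around makes the numeric predictions later in this post easier to read:

```python
# CIFAR-10 label indices map to these class names, in this fixed order.
class_names = [
    "airplane", "automobile", "bird", "cat", "deer",
    "dog", "frog", "horse", "ship", "truck",
]

# e.g. a label of 3 corresponds to "cat"
print(class_names[3])  # → cat
```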
import numpy as np
from tensorflow.keras.datasets import cifar10
(x_train,y_train),(x_test,y_test) = cifar10.load_data()
print(x_train.shape,x_test.shape)
print(y_train.shape,y_test.shape)
(50000, 32, 32, 3) (10000, 32, 32, 3) (50000, 1) (10000, 1)
import matplotlib.pyplot as plt
plt.figure(figsize=(5,5))
for i in range(1,16):
    plt.subplot(3,5,i)
    plt.imshow(x_train[i])
plt.show()
x_train = x_train/255.
x_test = x_test/255.
from tensorflow.keras.utils import to_categorical
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
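`to_categorical` turns each integer label into a one-hot vector, which is what `categorical_crossentropy` expects. The same transform can be sketched in plain NumPy, which makes the resulting (N, 10) shape explicit:

```python
import numpy as np

labels = np.array([3, 5, 0])    # integer class labels, as in y_train
one_hot = np.eye(10)[labels]    # same result as to_categorical(labels, 10)

print(one_hot.shape)   # (3, 10)
print(one_hot[0])      # 1.0 at index 3, zeros elsewhere
```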
from tensorflow.keras.layers import Conv2D,BatchNormalization,Flatten,Dense
from tensorflow.keras.models import Sequential
model = Sequential()
model.add(Conv2D(32,(3,3),activation='relu',input_shape=(32,32,3)))
model.add(BatchNormalization())
model.add(Conv2D(64,(3,3),activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(64,(3,3),activation='relu',strides=2))
model.add(BatchNormalization())
model.add(Conv2D(32,(3,3),activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(64,(3,3),activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(64,(3,3),activation='relu',strides=2))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(32,activation='relu'))
model.add(Dense(10,activation='softmax'))
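Note that this network downsamples with strided convolutions (`strides=2`) instead of pooling layers. The spatial size after each 'valid'-padded convolution follows the usual formula, which can be traced without running TensorFlow:

```python
def conv_out(size, kernel, stride=1):
    """Output spatial size of a 'valid' (unpadded) convolution."""
    return (size - kernel) // stride + 1

# Trace the 32x32 input through the six Conv2D layers above.
s = 32
for kernel, stride in [(3, 1), (3, 1), (3, 2), (3, 1), (3, 1), (3, 2)]:
    s = conv_out(s, kernel, stride)
    print(s)  # 30, 28, 13, 11, 9, 4

# The final feature map is 4x4x64, so Flatten yields 4*4*64 = 1024 features.
```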
from tensorflow.keras.callbacks import EarlyStopping,ModelCheckpoint,ReduceLROnPlateau
early_stopping = EarlyStopping(monitor="val_loss",patience=16)
reducelr = ReduceLROnPlateau(monitor = "val_loss",patience = 5)
model.compile(loss="categorical_crossentropy",optimizer="adam",metrics=["accuracy"])
history = model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test), batch_size=32, callbacks=[early_stopping, reducelr])
Epoch 1/100 - loss: 1.4881 - accuracy: 0.4714 - val_loss: 1.2884 - val_accuracy: 0.5460
Epoch 2/100 - loss: 1.0333 - accuracy: 0.6336 - val_loss: 1.1228 - val_accuracy: 0.5991
Epoch 3/100 - loss: 0.8313 - accuracy: 0.7083 - val_loss: 0.9835 - val_accuracy: 0.6657
Epoch 4/100 - loss: 0.7144 - accuracy: 0.7478 - val_loss: 0.8142 - val_accuracy: 0.7186
Epoch 5/100 - loss: 0.6306 - accuracy: 0.7800 - val_loss: 0.7757 - val_accuracy: 0.7348
Epoch 6/100 - loss: 0.5576 - accuracy: 0.8041 - val_loss: 0.7951 - val_accuracy: 0.7365
Epoch 7/100 - loss: 0.5014 - accuracy: 0.8249 - val_loss: 0.7485 - val_accuracy: 0.7559
Epoch 8/100 - loss: 0.4508 - accuracy: 0.8416 - val_loss: 0.8120 - val_accuracy: 0.7379
Epoch 9/100 - loss: 0.3957 - accuracy: 0.8613 - val_loss: 0.7701 - val_accuracy: 0.7571
Epoch 10/100 - loss: 0.3539 - accuracy: 0.8750 - val_loss: 0.9725 - val_accuracy: 0.7157
Epoch 11/100 - loss: 0.3165 - accuracy: 0.8899 - val_loss: 1.0108 - val_accuracy: 0.7235
Epoch 12/100 - loss: 0.2892 - accuracy: 0.8970 - val_loss: 0.8585 - val_accuracy: 0.7571
Epoch 13/100 - loss: 0.1628 - accuracy: 0.9464 - val_loss: 0.7686 - val_accuracy: 0.7864
Epoch 14/100 - loss: 0.1214 - accuracy: 0.9619 - val_loss: 0.7907 - val_accuracy: 0.7857
Epoch 15/100 - loss: 0.1026 - accuracy: 0.9689 - val_loss: 0.8164 - val_accuracy: 0.7832
Epoch 16/100 - loss: 0.0869 - accuracy: 0.9751 - val_loss: 0.8483 - val_accuracy: 0.7852
Epoch 17/100 - loss: 0.0775 - accuracy: 0.9783 - val_loss: 0.8746 - val_accuracy: 0.7825
Epoch 18/100 - loss: 0.0642 - accuracy: 0.9838 - val_loss: 0.8789 - val_accuracy: 0.7821
Epoch 19/100 - loss: 0.0599 - accuracy: 0.9849 - val_loss: 0.8827 - val_accuracy: 0.7835
Epoch 20/100 - loss: 0.0592 - accuracy: 0.9853 - val_loss: 0.8845 - val_accuracy: 0.7846
Epoch 21/100 - loss: 0.0577 - accuracy: 0.9863 - val_loss: 0.8875 - val_accuracy: 0.7842
Epoch 22/100 - loss: 0.0560 - accuracy: 0.9863 - val_loss: 0.8877 - val_accuracy: 0.7842
Epoch 23/100 - loss: 0.0552 - accuracy: 0.9868 - val_loss: 0.8897 - val_accuracy: 0.7843
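The log is consistent with both callbacks firing: val_loss last improved at epoch 7 (0.7485), so `EarlyStopping` with patience 16 stops training at epoch 23, and the sharp drop in training loss at epoch 13 matches `ReduceLROnPlateau` cutting the learning rate after 5 epochs without improvement (Keras's default reduction factor is 0.1). The core patience logic can be sketched in plain Python; this is a minimal sketch only, omitting the callback's `cooldown` and `min_delta` handling:

```python
def reduce_lr_on_plateau(val_losses, lr=1e-3, patience=5, factor=0.1):
    """Minimal sketch of ReduceLROnPlateau's patience logic."""
    best = float("inf")
    wait = 0
    lrs = []
    for loss in val_losses:
        if loss < best:          # val_loss improved: reset the counter
            best = loss
            wait = 0
        else:                    # no improvement this epoch
            wait += 1
            if wait >= patience: # stalled for `patience` epochs: cut the LR
                lr *= factor
                wait = 0
        lrs.append(lr)           # LR in effect after this epoch
    return lrs
```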
loss = history.history["loss"]
acc = history.history["accuracy"]
val_loss = history.history["val_loss"]
val_acc = history.history["val_accuracy"]
plt.plot(range(len(loss)),loss,label = "Loss")
plt.plot(range(len(val_loss)),val_loss,label = "val_Loss")
plt.grid()
plt.legend()
plt.show()
plt.plot(range(len(acc)),acc,label = "acc")
plt.plot(range(len(val_acc)),val_acc,label = "val_acc")
plt.grid()
plt.legend()
plt.show()
y_predict = model.predict(x_test)
y_predict = np.argmax(y_predict,axis=-1)
y_test = np.argmax(y_test,axis=-1)
for i in range(0,10000,200):
    plt.imshow(x_test[i])
    plt.show()
    print("y_predict : {} \t y_true : {}".format(y_predict[i],y_test[i]))
y_predict : 3 y_true : 3
y_predict : 5 y_true : 5
y_predict : 9 y_true : 9
y_predict : 8 y_true : 8
y_predict : 7 y_true : 7
y_predict : 5 y_true : 5
y_predict : 8 y_true : 8
y_predict : 5 y_true : 5
y_predict : 8 y_true : 8
y_predict : 0 y_true : 4
y_predict : 6 y_true : 1
y_predict : 0 y_true : 0
y_predict : 0 y_true : 0
y_predict : 8 y_true : 8
y_predict : 4 y_true : 4
y_predict : 3 y_true : 5
y_predict : 5 y_true : 5
y_predict : 1 y_true : 6
y_predict : 3 y_true : 4
y_predict : 9 y_true : 9
y_predict : 0 y_true : 8
y_predict : 4 y_true : 4
y_predict : 3 y_true : 3
y_predict : 4 y_true : 4
y_predict : 9 y_true : 9
y_predict : 7 y_true : 7
y_predict : 3 y_true : 3
y_predict : 9 y_true : 9
y_predict : 6 y_true : 6
y_predict : 4 y_true : 2
y_predict : 8 y_true : 8
y_predict : 8 y_true : 3
y_predict : 0 y_true : 0
y_predict : 7 y_true : 7
y_predict : 6 y_true : 6
y_predict : 0 y_true : 2
y_predict : 4 y_true : 4
y_predict : 3 y_true : 3
y_predict : 1 y_true : 8
y_predict : 0 y_true : 0
y_predict : 9 y_true : 9
y_predict : 3 y_true : 3
y_predict : 0 y_true : 0
y_predict : 3 y_true : 3
y_predict : 0 y_true : 0
y_predict : 8 y_true : 8
y_predict : 8 y_true : 8
y_predict : 6 y_true : 6
y_predict : 8 y_true : 8
y_predict : 1 y_true : 1
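Beyond spot-checking individual images, the overall test accuracy and a per-class confusion matrix can be computed directly from the two argmax arrays. The sketch below runs on small synthetic labels for illustration; it is not output from the model above:

```python
import numpy as np

def evaluate(y_true, y_pred, num_classes=10):
    """Accuracy and confusion matrix from integer label arrays."""
    acc = np.mean(y_true == y_pred)
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1          # rows: true class, columns: predicted class
    return acc, cm

# Hypothetical labels for illustration (not the model's actual output):
y_true = np.array([3, 5, 9, 8, 4])
y_pred = np.array([3, 5, 9, 8, 0])
acc, cm = evaluate(y_true, y_pred)
print(acc)        # 0.8
print(cm[4, 0])   # 1: one "deer" (4) misclassified as "airplane" (0)
```

The same call with the real `y_test` and `y_predict` arrays gives the model's test accuracy and shows which class pairs it confuses most.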