[chap 8-2] Image Classification with a Convolutional Neural Network
1️⃣ Loading the data
from tensorflow import keras
from sklearn.model_selection import train_test_split
(train_input, train_target), (test_input, test_target) = keras.datasets.fashion_mnist.load_data()
train_scaled = train_input.reshape(-1, 28, 28, 1) / 255.0
train_scaled, val_scaled, train_target, val_target = train_test_split(train_scaled, train_target, test_size=0.2, random_state=42)
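To sanity-check the reshape and the 80/20 split, a quick shape printout (a minimal check, assuming the code above has already run):
print(train_scaled.shape, train_target.shape)  # (48000, 28, 28, 1) (48000,)
print(val_scaled.shape, val_target.shape)      # (12000, 28, 28, 1) (12000,)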
2️⃣ Building the convolutional neural network
model = keras.Sequential()
# First convolutional layer: 32 filters
model.add(keras.layers.Conv2D(32, kernel_size=3, activation='relu', padding='same', input_shape=(28, 28, 1)))
# Pooling
model.add(keras.layers.MaxPooling2D(2))
# Second convolutional layer: 64 filters (input_shape is only needed on the first layer)
model.add(keras.layers.Conv2D(64, kernel_size=3, activation='relu', padding='same'))
# Pooling
model.add(keras.layers.MaxPooling2D(2))
# Hidden layer and output layer
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(100, activation='relu'))
model.add(keras.layers.Dropout(0.4))
model.add(keras.layers.Dense(10, activation='softmax'))
model.summary()
>>>
Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape             Param #
=================================================================
 conv2d (Conv2D)                 (None, 28, 28, 32)       320
 max_pooling2d (MaxPooling2D)    (None, 14, 14, 32)       0
 conv2d_1 (Conv2D)               (None, 14, 14, 64)       18496
 max_pooling2d_1 (MaxPooling2D)  (None, 7, 7, 64)         0
 flatten (Flatten)               (None, 3136)             0
 dense (Dense)                   (None, 100)              313700
 dropout (Dropout)               (None, 100)              0
 dense_1 (Dense)                 (None, 10)               1010
=================================================================
Total params: 333,526
Trainable params: 333,526
Non-trainable params: 0
_________________________________________________________________
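The parameter counts in the summary can be verified by hand as kernel weights plus biases per layer (a quick arithmetic check):
# Conv2D(32), 3x3 kernel, 1 input channel  : 3*3*1*32 + 32    = 320
# Conv2D(64), 3x3 kernel, 32 input channels: 3*3*32*64 + 64   = 18496
# Dense(100) after Flatten (7*7*64 = 3136) : 3136*100 + 100   = 313700
# Dense(10) output layer                   : 100*10 + 10      = 1010
print(3*3*1*32 + 32, 3*3*32*64 + 64, 3136*100 + 100, 100*10 + 10)  # 320 18496 313700 1010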
Viewing the model architecture
keras.utils.plot_model(model, show_shapes=True)
The first convolution produces a (28, 28, 32) feature map
🔽
(2, 2) max pooling -> (14, 14, 32)
🔽
64 filters -> a (14, 14, 64) feature map
🔽
(2, 2) max pooling -> (7, 7, 64)
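These shapes can be confirmed by pushing a dummy batch through the layers one at a time (a minimal sketch, assuming the model above has already been built):
import numpy as np
x = np.zeros((1, 28, 28, 1), dtype='float32')  # one fake 28x28 grayscale image
for layer in model.layers:
    x = layer(x)                               # apply each layer in order
    print(layer.name, x.shape)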
3️⃣ Compiling and training the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics='accuracy')
checkpoint_cb = keras.callbacks.ModelCheckpoint('best-cnn-model.h5', save_best_only=True)
early_stopping_cb = keras.callbacks.EarlyStopping(patience=2, restore_best_weights=True)
history = model.fit(train_scaled, train_target, epochs=20, validation_data = (val_scaled, val_target), callbacks=[checkpoint_cb, early_stopping_cb])
>>>
Epoch 1/20
1500/1500 [==============================] - 66s 43ms/step - loss: 0.5237 - accuracy: 0.8120 - val_loss: 0.3200 - val_accuracy: 0.8809
Epoch 2/20
1500/1500 [==============================] - 62s 42ms/step - loss: 0.3450 - accuracy: 0.8777 - val_loss: 0.2833 - val_accuracy: 0.8988
Epoch 3/20
1500/1500 [==============================] - 62s 41ms/step - loss: 0.2965 - accuracy: 0.8932 - val_loss: 0.2616 - val_accuracy: 0.9028
Epoch 4/20
1500/1500 [==============================] - 62s 41ms/step - loss: 0.2617 - accuracy: 0.9047 - val_loss: 0.2388 - val_accuracy: 0.9106
Epoch 5/20
1500/1500 [==============================] - 61s 41ms/step - loss: 0.2389 - accuracy: 0.9134 - val_loss: 0.2262 - val_accuracy: 0.9137
Epoch 6/20
1500/1500 [==============================] - 61s 41ms/step - loss: 0.2181 - accuracy: 0.9207 - val_loss: 0.2157 - val_accuracy: 0.9202
Epoch 7/20
1500/1500 [==============================] - 61s 41ms/step - loss: 0.1985 - accuracy: 0.9268 - val_loss: 0.2172 - val_accuracy: 0.9171
Epoch 8/20
1500/1500 [==============================] - 89s 59ms/step - loss: 0.1844 - accuracy: 0.9316 - val_loss: 0.2278 - val_accuracy: 0.9180
Training used the Adam optimizer together with the ModelCheckpoint and EarlyStopping callbacks. The best validation loss came at epoch 6; with patience=2, two more epochs without improvement ended training at epoch 8.
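To see where early stopping kicked in, the loss curves can be plotted from the history object (a minimal sketch, assuming matplotlib is available):
import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='val')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()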
Evaluating the model on the validation set:
model.evaluate(val_scaled, val_target)
375/375 [==============================] - 7s 19ms/step - loss: 0.2157 - accuracy: 0.9202
The loss (0.2157) and accuracy (0.9202) match epoch 6 exactly, because restore_best_weights=True rolled the model back to the epoch with the lowest validation loss.
Printing the predicted probabilities for the first validation sample
preds = model.predict(val_scaled[0:1])
print(preds)
[[5.7871392e-15 1.3805922e-17 4.2262581e-18 2.6966329e-16 1.4632787e-16
1.2062244e-17 3.3623695e-16 3.4051947e-18 1.0000000e+00 2.8774577e-16]]
The 9th value (index 8) is close to 1. To find out which class that index corresponds to, define the label names:
classes = ['T-shirt', 'Pants', 'Sweater', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneakers', 'Bag', 'Ankle boot']
import numpy as np
print(classes[np.argmax(preds)]) >> Bag
So the sample is predicted to be a bag.
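To double-check visually, the sample itself can be drawn (a minimal sketch, assuming matplotlib; cmap='gray_r' shows dark pixels on a light background):
import matplotlib.pyplot as plt
plt.imshow(val_scaled[0].reshape(28, 28), cmap='gray_r')
plt.axis('off')
plt.show()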