CNN model structure details
Layer | Input | Output | Filter size |
---|---|---|---|
Conv2D (ReLU) | 28×28×27 | 25×25×256 | 4×4 |
MaxPooling2D | 25×25×256 | 12×12×256 | 2×2 |
Conv2D (ReLU) | 12×12×256 | 10×10×256 | 3×3 |
MaxPooling2D | 10×10×256 | 5×5×256 | 2×2 |
Conv2D (ReLU) | 5×5×256 | 4×4×256 | 2×2 |
MaxPooling2D | 4×4×256 | 2×2×256 | 2×2 |
Flatten | 2×2×256 | 1,024 | |
Dense (ReLU) | 1,024 | 128 | 1 |
Dense (ReLU) | 128 | 128 | 1 |
Dense (ReLU) | 128 | 128 | 1 |
Dense (ReLU) | 128 | 64 | 1 |
Dense (ReLU) | 64 | 64 | 1 |
Dense (ReLU) | 64 | 64 | 1 |
Dense (sigmoid) | 64 | 64 | 1 |
Dense (softmax) | 64 | 2 | 1 |
The table lists each layer's type, input and output shapes, and filter size. ReLU was used as the activation function for the convolutional layers and most of the dense layers; the sigmoid and softmax functions were applied to the last two dense layers, respectively.
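As a point of reference, the table can be reproduced with the following sketch. Keras is assumed from the layer names (Conv2D, MaxPooling2D, Dense); stride 1 with valid padding is also assumed, since it yields the listed output shapes. The optimizer, loss, and input channel count are taken directly from the table and are not otherwise specified here.

```python
from tensorflow.keras import layers, models

# Sketch of the tabulated architecture (assumed Keras implementation).
# Comments give the output shape listed in the table for each layer.
model = models.Sequential([
    layers.Input(shape=(28, 28, 27)),               # 28×28×27 input
    layers.Conv2D(256, (4, 4), activation="relu"),  # -> 25×25×256
    layers.MaxPooling2D((2, 2)),                    # -> 12×12×256
    layers.Conv2D(256, (3, 3), activation="relu"),  # -> 10×10×256
    layers.MaxPooling2D((2, 2)),                    # -> 5×5×256
    layers.Conv2D(256, (2, 2), activation="relu"),  # -> 4×4×256
    layers.MaxPooling2D((2, 2)),                    # -> 2×2×256
    layers.Flatten(),                               # -> 1,024
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="sigmoid"),
    layers.Dense(2, activation="softmax"),          # two-class output
])

model.summary()  # printed shapes should match the table above
```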