Hardware Implementation of the Activation Layer and Mean Pooling Layer for the CNN Digit Recognition
Keywords:
Deep Learning, CNN Digit Recognition, Hardware CNN, Pooling Layers, FPGA, Hardware Accelerator
Abstract
This work enhances the efficiency of Convolutional Neural Networks (CNNs) for digit recognition through dedicated hardware designs of the ReLU activation function and the mean pooling layer. The CNN model is first implemented in MATLAB and trained on the MNIST dataset. The hardware architecture, designed in Verilog HDL for an Intel Cyclone IV E FPGA, reproduces the MATLAB outputs, as verified through ModelSim simulations. The hardware implementation delivers a substantial speedup, reducing execution time from 104,458 µs in software to 8.05 µs in hardware. The design reports a 2.60 GHz operating frequency, 4,457 logic elements, 2,522 registers, 409,600 memory bits, and 71.73 mW of thermal power dissipation, demonstrating superior computational efficiency.
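As context for the two layers the abstract describes, the following is a minimal software reference sketch of ReLU activation and mean (average) pooling, of the kind a MATLAB golden model would compute before hardware verification. This is an illustrative Python/NumPy sketch, not the paper's MATLAB or Verilog code; the 2×2 pooling window with stride 2 is an assumption, since the abstract does not state the window size.

```python
import numpy as np

def relu(x):
    # ReLU activation: element-wise max(0, x)
    return np.maximum(x, 0)

def mean_pool_2x2(fmap):
    # Mean pooling over non-overlapping 2x2 windows (stride 2).
    # Window size is assumed; the abstract does not specify it.
    h, w = fmap.shape
    # Crop to even dimensions, then average each 2x2 block.
    cropped = fmap[: h // 2 * 2, : w // 2 * 2]
    return cropped.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

In a fixed-point FPGA datapath, the ReLU reduces to a sign-bit check, and a 2×2 mean pool reduces to an adder tree followed by a 2-bit right shift, which is one reason these layers are attractive targets for dedicated hardware.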
License
Copyright (c) 2024 Journal of Electronic Voltage and Application

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.