Conference paper

Design of a Gaussian Activation Function Generator for Neural Network Applications

C.R. Popa (Polytech. Univ. Bucharest, Romania)

This paper introduces a novel method for the design of a Gaussian activation function generator. To achieve enhanced accuracy in Gaussian function generation, an original higher-order mathematical approximation is employed. A further improvement in approximation precision is obtained through the implementation of a newly proposed variable transformation technique. The highly accurate approximation function is realized in CMOS technology by means of two types of computational circuits: a squaring circuit and a multiplier/divider circuit. The simple and accurate implementation of these computational structures also contributes to the overall high precision of the proposed Gaussian activation function generator. The current-mode operation on which these CMOS computational circuits are based significantly improves the frequency response of the proposed generator and additionally enables low-voltage operation, with a supply voltage of 0.9 V. Biasing all MOS transistors at extremely low current levels ensures low-power operation, the maximum power consumption being approximately 2.5 μW. The silicon area required for the proposed Gaussian activation function generator is approximately 30 μm². The functionality of the proposed generator is validated through simulations performed using a 0.18 μm TSMC CMOS process, with SPICE simulation results confirming the theoretical analysis.
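The abstract does not reproduce the paper's specific higher-order approximation or its variable transformation technique, but the idea of building a Gaussian from a squaring block and a multiplier/divider block can be illustrated numerically. The sketch below is a hypothetical stand-in: it approximates exp(-x²) with a truncated Taylor series in which each term is formed only by squaring, multiplication, and division, mirroring the two computational circuits the paper names. The order and the series itself are assumptions for illustration, not the paper's actual approximation.

```python
import math

def gauss_approx(x: float, order: int = 4) -> float:
    """Approximate exp(-x^2) using only squaring and multiply/divide steps.

    This mimics, in software, a generator built from a squaring circuit
    (computing y = x^2) and a multiplier/divider circuit (forming each
    successive Taylor term). The truncated series used here,
        exp(-y) ~ sum_{k=0..order} (-y)^k / k!,
    is an illustrative choice, not the approximation from the paper.
    """
    y = x * x                  # squaring-circuit analogue
    s, term = 1.0, 1.0
    for k in range(1, order + 1):
        term = term * (-y) / k  # multiplier/divider-circuit analogue
        s += term
    return s

# For small inputs the low-order series tracks the true Gaussian closely.
print(gauss_approx(0.5), math.exp(-0.25))
```

For |x| well below 1 a fourth-order truncation already agrees with exp(-x²) to several decimal places; the paper's variable transformation presumably serves a similar role of keeping the argument of the approximation in a range where few terms suffice.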

Receipt of papers:

March 15th, 2026

Notification of acceptance:

April 30th, 2026

Registration opening:

May 2nd, 2026

Final paper versions:

May 15th, 2026