Access Restriction Open

Author Parker, Alice C. ♦ Eshaghian-Wilner, Mary M. ♦ Navab, Shiva ♦ Khitun, Alex ♦ Wang, Kang L. ♦ Friesz, Aaron ♦ Zhou, Chongwu
Source CiteSeerX
Content type Text
File Format PDF
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword Fan-in/Fan-out Constraint ♦ Interconnected Neural Network Model ♦ New Nanoscale Spin-wave-based Architecture ♦ Neural Network ♦ Superposition Property ♦ Point-to-point Interconnection ♦ Multiple Data ♦ Nanoscale Spin-wave Architecture ♦ Hopfield Model ♦ Standard VLSI Design
Abstract In this paper, we propose a new nanoscale spin-wave-based architecture for implementing neural networks. We show that this architecture can efficiently realize highly interconnected neural network models such as the Hopfield model. In the proposed architecture, no point-to-point interconnection is required, so, unlike in standard VLSI design, no fan-in/fan-out constraint limits the interconnectivity. Using spin waves, each neuron can broadcast to all other neurons simultaneously and, likewise, can concurrently receive and process multiple data. Therefore, in this architecture, the total weighted sum at each neuron can be computed as the sum of the values of all waves arriving at that neuron. Moreover, by the superposition property of waves, this computation can be done in O(1) time, so neurons can update their states very rapidly.
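The Hopfield update the abstract describes can be sketched in software: each neuron's "total weighted sum" is the superposition of all incoming signals (here a matrix-vector product stands in for the physical wave superposition, which the architecture performs in O(1) without point-to-point wiring). This is a minimal illustrative sketch using standard Hebbian weights, not the authors' hardware design; all names below are hypothetical.

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian weight matrix for a set of +/-1 patterns (zero self-coupling)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def hopfield_step(states, weights):
    """One synchronous Hopfield update: every neuron forms its total
    weighted sum from all incoming signals at once (the role played by
    wave superposition in the spin-wave architecture), then thresholds."""
    fields = weights @ states          # superposed weighted sums, all neurons
    return np.where(fields >= 0, 1, -1)

# Two orthogonal 8-neuron patterns; recall pattern 0 from a corrupted copy.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = store_patterns(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]                   # flip one bit
recalled = hopfield_step(noisy, W)     # recovers patterns[0]
```

In the spin-wave architecture the `weights @ states` step does not iterate over wires: all incoming waves superpose at the neuron simultaneously, which is why the update cost per neuron is independent of fan-in.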
Educational Role Student ♦ Teacher
Age Range above 22 years
Educational Use Research
Education Level Undergraduate and Postgraduate ♦ Career/Technical Study
Learning Resource Type Article