
Multi-Neural Networks hardware and software architecture: Application of the divide to simplify paradigm DTS

  • Neural Nets Simulation, Emulation and Implementation
  • Conference paper
Biological and Artificial Computation: From Neuroscience to Technology (IWANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1240)


Abstract

We present in this paper the implementation of the data-driven method we call DTS (Divide To Simplify), which dynamically builds a Multi-Neural Network architecture. The proposed Multi-Neural Network architecture solves a complex problem by splitting it into several simpler problems. We have previously presented a software version of the DTS multi-neural network architecture. The main idea of the DTS approach is to use a set of small, specialized mapping neural networks, or Slave Neural Networks (SNN), guided by a prototype-based neural network, or Master Neural Network (MNN). In this paper, the MNN manages a set of digital hardware neural networks. Learning is performed in a few milliseconds, and we obtain a very good classification rate on the two-spirals benchmark problem.
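The master/slave decomposition described in the abstract can be illustrated with a minimal sketch, not the authors' implementation: here the "master" is a simple nearest-prototype router (the class and method names, the perceptron slaves, and the XOR toy problem are all illustrative assumptions), and each "slave" is a tiny linear classifier trained only on the samples routed to its region.

```python
# Hypothetical DTS-style sketch: a prototype-based "master" partitions the
# input space; each small "slave" specializes on one region.
import math

class SlavePerceptron:
    """Tiny linear classifier specialized on one region of the input space."""
    def __init__(self, dim, lr=0.1, epochs=50):
        self.w = [0.0] * dim
        self.b = 0.0
        self.lr, self.epochs = lr, epochs

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def fit(self, X, y):
        # Classic perceptron updates, run only on this region's samples.
        for _ in range(self.epochs):
            for x, t in zip(X, y):
                err = t - self.predict(x)
                if err:
                    self.w = [wi + self.lr * err * xi
                              for wi, xi in zip(self.w, x)]
                    self.b += self.lr * err

class DTS:
    """Master routes each sample to the slave owning the nearest prototype."""
    def __init__(self, prototypes):
        self.prototypes = prototypes
        self.slaves = {}

    def route(self, x):
        return min(range(len(self.prototypes)),
                   key=lambda i: math.dist(x, self.prototypes[i]))

    def fit(self, X, y):
        # Split the training set by region, then train one slave per region.
        regions = {}
        for x, t in zip(X, y):
            regions.setdefault(self.route(x), []).append((x, t))
        for i, pairs in regions.items():
            slave = SlavePerceptron(len(X[0]))
            slave.fit([p for p, _ in pairs], [t for _, t in pairs])
            self.slaves[i] = slave

    def predict(self, x):
        return self.slaves[self.route(x)].predict(x)

# XOR is not linearly separable globally, but once the master splits the
# plane into two regions, each region IS linearly separable for its slave.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]
dts = DTS(prototypes=[(0, 0.5), (1, 0.5)])
dts.fit(X, y)
print([dts.predict(x) for x in X])  # → [0, 1, 1, 0]
```

The toy XOR problem plays the same role here as the two-spirals benchmark in the paper: a task no single linear unit can solve becomes easy once the master decomposes it into simpler sub-problems.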




Editor information

José Mira, Roberto Moreno-Díaz, Joan Cabestany


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Chebira, A., Madani, K., Mercier, G. (1997). Multi-Neural Networks hardware and software architecture: Application of the divide to simplify paradigm DTS. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032544


  • Print ISBN: 978-3-540-63047-0

  • Online ISBN: 978-3-540-69074-0
