Deep Neural Networks

Definition:

Deep neural networks (DNNs) are a class of artificial neural networks composed of multiple layers of processing units. They are designed to learn complex patterns and relationships from large amounts of data and are widely used in applications such as image recognition, speech recognition, and natural language processing.

Functionality:

  1. Architecture: DNNs consist of input, hidden and output layers. The hidden layers allow the network to learn non-linear relationships by processing the data through various transformations (LeCun, Bengio, & Haffner, 1998).
  2. Training: DNNs are typically trained with backpropagation, which computes the gradient of the error between predicted and actual outputs; the weights of the connections between neurons are then adjusted by gradient descent to minimise that error (Rumelhart, Hinton, & Williams, 1986).
  3. Applications: DNNs have enabled significant advances in several areas, including:
    • Image classification (Krizhevsky, Sutskever, & Hinton, 2012)
    • Speech recognition (Hinton et al., 2012)
    • Generative models (Goodfellow et al., 2014)
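The layered architecture described in point 1 can be sketched as a forward pass: each hidden layer applies an affine transformation followed by a non-linearity, and the output layer is linear. The layer sizes, ReLU activation, and initialisation below are illustrative assumptions, not taken from any of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # ReLU non-linearity; this is what lets hidden layers model
    # non-linear relationships (an assumed, common choice)
    return np.maximum(0.0, x)

def forward(x, params):
    """Propagate input x through each (W, b) layer pair."""
    *hidden, last = params
    for W, b in hidden:
        x = relu(x @ W + b)   # hidden layers: affine map + non-linearity
    W, b = last
    return x @ W + b          # output layer: linear

# Illustrative sizes: 4 inputs -> two hidden layers of 8 units -> 3 outputs
sizes = [4, 8, 8, 3]
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

y = forward(rng.standard_normal((5, 4)), params)  # batch of 5 examples
print(y.shape)  # (5, 3)
```

Stacking more (W, b) pairs into `params` deepens the network without changing the forward-pass code.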
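The training procedure in point 2 can likewise be sketched end to end. The example below is a minimal, assumed setup (a 2-4-1 sigmoid network fitted to XOR by full-batch gradient descent on squared error); it is not the configuration used in the cited papers, but the backward pass shows the core idea of backpropagation: the error gradient is propagated layer by layer from the output back to the input.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: a classic task that a network without hidden layers cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

# Assumed architecture: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)
lr = 1.0  # learning rate (assumed)

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # backward pass: chain rule applied layer by layer
    dy = (y - t) * y * (1 - y)       # gradient at the output pre-activation
    dh = (dy @ W2.T) * h * (1 - h)   # gradient propagated to the hidden layer
    # gradient-descent updates that minimise the squared error
    W2 -= lr * (h.T @ dy); b2 -= lr * dy.sum(axis=0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)

print(np.round(y.ravel(), 2))
```

After training, the outputs should approach the XOR targets 0, 1, 1, 0. In practice, frameworks compute these gradients automatically (automatic differentiation), but the update rule is the same.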

Literature:

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., … & Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2672-2680. Retrieved from https://arxiv.org/abs/1406.2661

Hinton, G. E., Deng, L., Yu, D., Dahl, G. E., & Mohamed, A. R. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6), 82-97. https://doi.org/10.1109/MSP.2012.2205597

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 1097-1105. Retrieved from https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks

LeCun, Y., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324. https://doi.org/10.1109/5.726791

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536. https://doi.org/10.1038/323533a0
