Supervised artificial recurrent neural networks (RNNs) have adaptive feedback connections that allow them to learn mappings from input sequences to output sequences. In principle, they can implement almost arbitrary sequential, algorithmic behavior. They are biologically more plausible and computationally more powerful than other adaptive models such as Hidden Markov Models and Conditional Markov Random Fields (which lack continuous internal states), and than feedforward neural networks and traditional Support Vector Machines (SVMs), which lack internal states altogether. The goal of this project is to further improve and analyze our state-of-the-art algorithms for supervised RNNs, especially bi-directional RNNs, and to apply them to challenging sequence learning tasks such as speech processing, sunspot prediction, and protein prediction. One focus is on novel types of recurrent SVMs; another is on RNNs with information-theoretic objectives and on combinations with statistical approaches.
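To make the idea of adaptive feedback connections concrete, here is a minimal sketch (in NumPy) of a plain recurrent network's forward pass, showing how a continuous internal state lets each output depend on the whole input history. This is illustrative only; the architectures studied in the project (e.g. bi-directional RNNs, recurrent SVMs) are considerably more elaborate, and all weight shapes and names below are hypothetical.

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, W_hy, b_h, b_y):
    """Map an input sequence xs (list of vectors) to an output sequence.

    The hidden state h is the network's continuous internal state; the
    W_hh term is the feedback connection that carries it across time steps.
    """
    h = np.zeros(W_hh.shape[0])
    ys = []
    for x in xs:
        # new state depends on the current input AND the previous state
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        ys.append(W_hy @ h + b_y)   # one output vector per input step
    return ys

# Toy dimensions (hypothetical): 3-dim inputs, 5 hidden units, 2-dim outputs.
rng = np.random.default_rng(0)
W_xh = 0.1 * rng.standard_normal((5, 3))
W_hh = 0.1 * rng.standard_normal((5, 5))
W_hy = 0.1 * rng.standard_normal((2, 5))
b_h, b_y = np.zeros(5), np.zeros(2)

seq = [rng.standard_normal(3) for _ in range(4)]
out = rnn_forward(seq, W_xh, W_hh, W_hy, b_h, b_y)
print(len(out), out[0].shape)
```

A feedforward network or a standard SVM, by contrast, would compute each output from the current input alone: dropping the `W_hh @ h` term above reduces the model to exactly that, which is why such models cannot represent sequential, state-dependent behavior.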