Multisensory integration is the process by which the brain combines incoming sensory information from different modalities into coherent and robust representations. It plays a significant role in perception as well as in cognitive functions ranging from memory to decision-making. Because of its inherently multimodal nature, it is difficult to study experimentally, and many open questions remain about its underlying neural mechanisms. In my research, I approach this problem from two complementary perspectives, both of which combine computational models with experimental data. The first approach focuses on understanding the neurobiological mechanisms that might support multisensory integration; it revolves around developing biologically plausible deep learning methods and comparing their properties with experimental data. The second approach shifts the focus to identifying the underlying structures necessary for multisensory integration, rather than the intermediate mechanisms that give rise to those structures; this direction uses recurrent neural networks trained with error backpropagation to perform cognitive tasks, as sketched below.
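
As a minimal illustration of the second approach, the sketch below trains a small recurrent network with backpropagation through time on a toy two-modality evidence-integration task. The task design, network dimensions, and hyperparameters are illustrative assumptions for exposition, not the models or tasks used in my research.

```python
# Minimal sketch: an RNN trained with error backpropagation (BPTT) on a toy
# "cognitive task" in which two noisy sensory streams must be integrated.
# All task parameters and architecture choices here are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

T, BATCH = 50, 64      # timesteps per trial, trials per batch (assumed)
N_HIDDEN = 128         # number of recurrent units (assumed)

def make_trials(batch):
    """Toy task: two noisy 1-D streams ("visual", "auditory") share a common
    latent signal; the network must report the signal's sign at trial end."""
    signal = torch.sign(torch.randn(batch, 1))              # latent cause: +1 / -1
    drift = 0.1 * signal.unsqueeze(0).expand(T, batch, 1)   # weak evidence per step
    visual = drift + 0.5 * torch.randn(T, batch, 1)         # modality 1 (noisy)
    auditory = drift + 0.5 * torch.randn(T, batch, 1)       # modality 2 (noisy)
    inputs = torch.cat([visual, auditory], dim=-1)          # shape (T, batch, 2)
    target = (signal > 0).float()                           # binary choice
    return inputs, target

class MultisensoryRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=N_HIDDEN, nonlinearity="tanh")
        self.readout = nn.Linear(N_HIDDEN, 1)

    def forward(self, x):
        hidden, _ = self.rnn(x)          # integrate evidence across time
        return self.readout(hidden[-1])  # decision read out from final state

model = MultisensoryRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):                  # training loop: error backpropagation
    inputs, target = make_trials(BATCH)
    loss = loss_fn(model(inputs), target)
    optimizer.zero_grad()
    loss.backward()                      # gradients via BPTT
    optimizer.step()
```

Once trained, the network's hidden-unit dynamics can be analyzed to ask what representational structures support integrating the two streams, which is the kind of structure-level question this second approach targets.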