When you listen to the Ramayana, your mind has to put effort into imagining the visuals. With repeated listening, those visual perceptions are refined. As you grow and refine your perceptions, you also benefit from an invisible therapeutic healing of body and mind. You are involved. You are active.
The next best method to engage the mind with the Ramayana is a live performance. Here there is still scope for imagination and refinement, since the scenes are not perfect.
The worst option is modern television and movies. Ready-made scenes. Detailed scenes. Invisible memes. Conditioning for petty selfish motives. And you passively gulp down all the visual perceptions. As per this research (concentrating attention on a visual task can render you momentarily ‘deaf’ to sounds at normal levels), you remain deaf (and dumb too) while you are engrossed in rich and detailed visual scenes. This eliminates auditory inspiration and stimuli to the mind. Major drawback – the TV watcher misses valuable चित्त शुध्धि (purification of mental imprints due to karma/actions). He or she also misses the opportunity for self-realization, as there is no one-to-one relation between the narrator (who is often considered a self-realized and mature person) and the listener, which is natural in traditional Ramayana recitation/narration.
Why focusing on a visual task makes us deaf to our surroundings
Concentrating attention on a visual task can render you momentarily ‘deaf’ to sounds at normal levels, reports a new UCL study funded by the Wellcome Trust.
The study, published in the Journal of Neuroscience, suggests that the senses of hearing and vision share a limited neural resource. Brain scans from 13 volunteers found that when they were engaged in a demanding visual task, the brain response to sound was significantly reduced. Examination of people’s ability to detect sounds during the demanding visual task also showed a higher rate of failures to detect sounds, even though the sounds were clearly audible — and people did detect them when the visual task was easy.
“This was an experimental lab study which is one of the ways that we can establish cause and effect. We found that when volunteers were performing the demanding visual task, they were unable to hear sounds that they would normally hear,” explains study co-author Dr Maria Chait (UCL Ear Institute). “The brain scans showed that people were not only ignoring or filtering out the sounds, they were not actually hearing them in the first place.”
The phenomenon of ‘inattentional deafness’, where we fail to notice sounds when concentrating on other things, has been observed by the researchers before. However, this is the first time that they have been able to determine, by measuring brain activity in real-time using MEG (magnetoencephalography), that the effects are driven by brain mechanisms at a very early stage of auditory processing which would be expected to lead to the experience of being ‘deaf’ to these sounds.
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness.
SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing.