How a person understands and interprets speech is an extremely complex process, involving the synaptic responses to the physics of sound, neural activity in the nervous system and brain, and ultimately the processing of those neural impulses in various regions of the brain.
Scientists have started mapping those regions of our brains by watching what happens inside the brain through MRI, PET, and other scanning technologies. People who have suffered brain damage have also helped us gain a better understanding of which parts of our brains do what.
Speech and language, meaning the vocal transfer of meanings, feelings, ideas, ideologies, experiences and everything else human beings exchange and communicate, are processed through various parts of our brain.
Some of the components of speech and language processing in our brains are (I’m sure there are many, many more, each specializing in its own incredible way!):
1. acoustic processing
2. visual processing for lip reading
3. semantic processing (vocabulary)
4. short- and long-term memory (previous context, experience, reference)
5. visuoauditory processing, meaning that the brain both processes and somehow merges each individual sensory input (bisensory – vision and hearing); keep in mind, we don’t fully understand everything about our brains’ functions yet
6. contextual processing
7. “alternative contextual qualified guessing” (you might also call it fantasy 🙂 ) – when all other understanding strategies fail, it’s the last attempt at understanding, and results in either a question, embarrassment, or success
OK, that was the oversimplified crash course in what we know about how our brains process speech.
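Sticking with the computing metaphor, you could picture that list of understanding strategies as a fallback cascade: try each one in turn, and when everything fails, resort to qualified guessing. This is purely a playful sketch of my own – every name in it is invented for illustration, and real speech perception is massively parallel, nothing like a simple chain:

```python
def make_strategy(name, known):
    """A toy strategy: 'understands' only utterances it already knows."""
    def strategy(utterance):
        if utterance in known:
            return f"{name}: understood '{utterance}'"
        return None  # this strategy fails; fall through to the next one
    return strategy

def understand(utterance, strategies):
    """Try each understanding strategy in order."""
    for strategy in strategies:
        meaning = strategy(utterance)
        if meaning is not None:
            return meaning
    # Last resort: "alternative contextual qualified guessing"
    return f"guess: maybe '{utterance}'? (question, embarrassment, or success)"

# Invented toy strategies – each one only "knows" a few things.
strategies = [
    make_strategy("acoustic", {"streetcar"}),
    make_strategy("lip-reading", {"hello"}),
    make_strategy("semantic", {"piano"}),
]

print(understand("hello", strategies))   # lip-reading succeeds
print(understand("mumble", strategies))  # everything fails -> guessing
```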
Taking that info into account, think about what happens if the signals change radically. What happens when a hearing-aid user, de facto deaf (unable to comprehend speech without the sensory aid of hearing aids, or the contextual aid of sign language, lip reading, or written text), is fitted with a CI or two?
In my case: what happens when I have suffered from “recruitment” while using my hearing aids for many years, and then suddenly the recruitment is gone, AND the perceived frequencies have shifted completely out of their previously normal neural pathways, starting with the hair cells in my cochlea?
My implant feeds electric impulses to the part of my cochlea with left-for-dead, broken hair cells, while the previously still somewhat functional part is now left abandoned, not receiving any kind of stimuli anymore. (It’s like playing a piano on the octaves at the far left side for your whole life, and suddenly someone moves the entire piano so that you now sit at the far right!)
Well, obviously my brain has some work to do! The rewiring of the neural pathways is one thing, and the brain’s processing is another. I believe we can agree that the neural rewiring, both in our nervous system and in our brains (which, I agree, are in fact part of our nervous system), is about new synaptic paths forming, adjusting our nervous system to the new sensory reality.
But what about the brain’s processing of these sensory inputs? The part of my brain that performs acoustic processing adjusts to the change in frequencies; the new auditory virtual reality slowly becomes THE reality, due to the lack and loss of the old auditory reality.
The phonology of every word has changed; how does my brain cope with that? Rewiring and relearning.
The short-term memory function now has to deal with input data that is totally new in appearance. Nothing sounds like before. A streetcar doesn’t sound like a streetcar. A woman in high heels sounds like a carpenter hammering down a nail. A kid laughing sounds like an animal dying. A kid crying sadly sounds like an anger fit.
Do you see where I’m going with this? The change in the quality of the perceived sound also changes the contextual package, i.e. what my brain interprets that specific sound to be, which also decides my initial contextual and sometimes emotional processing. So now my contextual database also has to be reprogrammed.
The long-term memory databank contains data that is now invalid. My mother’s voice doesn’t match her voiceprint in my brain. All the people I have learned to identify by their speech patterns (how they pause, and so on) now need to be reprogrammed. It’s like having to replace your entire vinyl music collection with low-quality compressed digital music (like MP3 files).
I will forget the old information and fill it up with the new. As the Borg in Star Trek say: “You will be assimilated.”
If the part of my brain that does the acoustic processing changes its algorithms, I assume it’s fair to expect a change in the output from that process, which consequently means that the parts of my brain receiving the processed, now “re-digitized” audio also have to change THEIR algorithms!
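To stretch my own metaphor one step further, here is a toy illustration (entirely my invention, not how brains actually work) of that cascade: when an upstream “acoustic” stage changes its output encoding, a downstream stage calibrated against the old encoding misidentifies everything until it relearns the new mapping:

```python
# Old acoustic stage: hair cells encoded sounds as certain (made-up) codes.
def old_acoustic(sound):
    return {"streetcar": 10, "mother": 20}[sound]

# New acoustic stage: the implant shifts every code (the piano moved!).
def new_acoustic(sound):
    return {"streetcar": 110, "mother": 120}[sound]

# Downstream recognizer, calibrated against the OLD codes.
memory = {10: "streetcar", 20: "mother"}

def recognize(code):
    return memory.get(code, "???")

print(recognize(old_acoustic("mother")))  # "mother" - the old pipeline worked
print(recognize(new_acoustic("mother")))  # "???" - downstream no longer matches

# Relearning: rebuild the downstream memory against the new encoding.
memory = {new_acoustic(s): s for s in ("streetcar", "mother")}
print(recognize(new_acoustic("mother")))  # "mother" - recognized again
```

The point of the sketch is only that a change in one stage’s output forces every later stage to recalibrate too.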
In that way, my entire language system is presently undergoing a complete, heavy, and thorough modification.
I have noticed this in the following ways:
I can “hear” better, but I have problems remembering the first part of the sentence I hear, OR I only perceive the first part and my brain skips the last part of a sentence. I deduce from this “brain-rewiring hypothesis” that my short-term memory is having trouble storing the strange-sounding words in its flash memory. The input data kind of doesn’t fit properly.
The other parts of my language system also sometimes suffer from overload or faults, causing a crash. Like when ambient noise occurs and the voice I’m listening to drowns in that noise, the contextual and visual processing parts of my brain need to take over; but since I have been so focused on the auditory processing (due to the new and strange sound quality), the take-over comes just a little bit too slow for me to follow the person talking…
Think of that last paragraph as trying to follow an intricate discussion about a complex issue while having two or three kids climbing all over you, demanding attention. Sometimes they DO get your attention, and what happens then to the discussion you were following?
That’s when my “alternative contextual qualified guessing” kicks into gear 🙂
And this time I won’t even get into the emotional and psychological aspect of this brain-rewiring process that I’m currently undergoing… 🙂 I think each and every one of you who reads this can imagine the psychological and emotional implications for yourself.
Some things are best left unsaid?