Tinnitus, stress management and other CI things

Spring is right around the corner and the sun is shining upon us again 🙂 This winter has been put to very good use by yours truly. I’ve learned to hear again!
I moved in with my fabulous girlfriend, and we have purchased an apartment together :-)
I have a better relationship with my son, being able to really TALK with him (and give the right responses, now that I’m not always tired out of my skull).
And I have taken up cross-country skiing again. It had been more than 15 years since I last roamed the ski tracks in the Norwegian woods 🙂

6-month re-map milestone

I had a re-map a week back; that was my 6-month appointment. This time we increased the volume of all frequencies again. In addition, I now have one program with increased dynamic range (70%) and one program with decreased dynamic range (50%).

Dynamic range is the range of frequencies that the microphone and processor accept into the implant. 50% means that 25% of the frequencies at each end of the range are not used (both very deep bass and very high pitch are cut, keeping only the sounds in the middle).

I had 60% for the last few months. I mainly use the 70% program now; I find it much more pleasant to hear as much as possible. But it is tiring too, as it takes time for the brain to develop noise-filtering skills. So the 50% program is for when I am tired and need to shield myself from some of the ambient noise around me.
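To make the percentages concrete, here is a tiny toy sketch (my own illustration, with made-up cutoff numbers, not the actual clinical fitting software) of what keeping the middle share of a frequency range means:

```python
def trimmed_band(low_hz: float, high_hz: float, percent_kept: float):
    """Keep the middle `percent_kept` of a frequency range,
    trimming equal shares off the bass and treble ends."""
    span = high_hz - low_hz
    trim = span * (1 - percent_kept / 100) / 2  # cut this much at each end
    return low_hz + trim, high_hz - trim

# Hypothetical 100-8000 Hz processor range:
print(trimmed_band(100, 8000, 70))  # -> (1285.0, 6815.0)
print(trimmed_band(100, 8000, 50))  # -> (2075.0, 6025.0)
```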

Ambivalent regarding current sound quality

I am not happy with the quality of the sound… After this mapping it feels like I took one step back in terms of speech comprehension. Yet when I switch back to my previous program (yes, I kept that one; I think that was a good move in terms of motivating myself) it just sounds really, really bad. So the new adjustments are definitely a step in the right direction, but it seems that every time the frequencies are re-mapped, my brain needs a new period of adjustment… Tiring, somewhat de-motivating and a little bit frustrating. But these issues are, by all means, peanuts compared to what lies behind me at this point. I am simply super-impatient. 🙂

The speech of others is super crisp, sharp. It is hard to get used to after decades of dull, watery, “cottonish” mumble-jumble. It is kind of like touching an area of skin that has been burnt: very sensitive, painful and unavoidable. The nerves that receive the electronic impulses from my implant feel raw and exposed.

Tinnitus and stress management

I’ve come a long way with my tinnitus management self-study course. I’ve learned to control the level of stress in me, both psychologically and physically. Yes, the stress is there; the stressors will never go away. BUT I can better control my own stress reactions, and I can get rid of the worst tension, so tiredness, irritation and other related symptoms are lessened.

The self-study course works like this (very boiled down):

First you learn to relax muscles “manually”. When you have relaxed many, many times, you learn to do it quicker and more easily. It’s like learning anything else: at first it’s a little hard and awkward, but with enough practice and repetition, the results start appearing.

The key words are muscle awareness and relaxation, then breathing technique. Breathing while stressed is short, shallow and chest-based, whilst breathing from the stomach enables longer and deeper breaths, thus tricking your body into believing you’re not stressed. When relaxed, all our breathing originates from the abdominal region; the chest does not move much.

The concept is quite ingenious: when you and your body start to remember how to get to the relaxed state, you can use memory techniques to invoke that memory really fast (e.g. a code word while breathing out slowly). By bringing that memory to the front of your consciousness, the pre-programmed, previously learned and experienced relaxation kicks in.

Personal gain and experiences

These days I am much more conscious of my level of stress. Every time I drive my car, I notice the stress coming (traffic is full of stressors), and I use the time in the car to train at getting my stress down.

I get stressed when playing an online game, and I train at getting rid of that stress as well.

I feel like I’m getting better psychosomatically. Less pain, more rested, and I am healing faster (I dislocated my shoulder, and a week later I am almost without pain!). The last time I dislocated my shoulder, I needed physical therapy and months of training to become pain-free again (of course the damage was much more severe back then, but a dislocated shoulder is still a major pain :-) ).

I feel like I can endure more, but I am not sure whether that is due to my stress management training alone. I guess it’s partly that, partly the CI sound improving, and partly more daylight, sunshine and warmer temperatures 🙂

Things are improving; I’m breaking through the dark clouds slowly but surely, and the blue sky is closer than ever 🙂

Unrealistic Expectations from the World? Audism?

What do people expect from me? They expect me to participate in social activities and to be part of the “common consciousness”. That is a fair expectation in my opinion. In this blog post I want to take a look at some circumstances and obstacles concerning these expectations. I think it will be wise to read the definitions of some of the words I use; they are linked, just as the word “expectation” was linked above. That way we will be on the “same page”.

In this respect I am thinking about what we expect, and when we expect it, in terms of my hearing progress. This is also a sore and difficult point for me, since it is very much about social interaction and how I am perceived socially. How I am viewed as a person.

As I walk the path of CI rehabilitation and re-learning to hear, I am making some discoveries about the expectations of my recovery, both my own and those of others near and dear.

Me, a social outsider

All my life I’ve been a part of the hearing world, and thus a social outsider. Even among my closest friends and family, I got, and still get, remarks and comments that hurt to the core of my being. I’m sometimes left with a feeling that people suspect me of WANTING to be isolated or withdrawn from the issues that are talked about. I often feel misunderstood and misinterpreted. For instance, my withdrawal from social events is sometimes interpreted as a lack of interest, or of any attempt to socialize. That is so unfair and sad. I’ll explain why…

The more people talk at the same time, the more impossible it is for me to interact in a meaningful way. Believe me when I say I really wish I were able to interact with others on their terms, but the damage to my hearing makes that incredibly hard. There is a limit to everyone’s mental capacity and endurance; mine is reached sooner than most people’s in social interaction, due to the sheer effort of listening and understanding.


Rewiring my brain – altering the language system?


How a person understands and interprets speech is an extremely complex process, involving the synaptic responses to the physics of sound, neural activity in the nervous system and brain, and ultimately the processing of those neural impulses in various regions of the brain.

Scientists have started mapping those regions of our brains by watching what happens inside the brain through MRI, PET and other means of scanning technology. People who have suffered damage to their brains have also helped us gain more understanding of which parts of the brain do what.

Speech and language, meaning the vocal transfer of meanings, feelings, ideas, ideologies, experiences and everything else human beings exchange and communicate, are processed through various parts of our brain.

Some of the components of speech and language processing in our brains (and I’m sure there are many, many more, each specializing in its own incredible way!) are:
1. acoustic processing
2. visual processing for lip reading
3. phonology
4. semantic processing (vocabulary)
5. short- and long-term memory (previous context, experience, reference)
6. visuoauditory processing, meaning that the brain both processes and somehow merges the individual sensory inputs (bisensory: vision and hearing); keep in mind, we don’t fully understand everything about our brain’s functions yet
7. contextual processing
8. “alternative contextual qualified guessing” (you might also call it fantasy 🙂 ): when all other understanding strategies fail, it’s the last attempt at understanding, and it results in either a question, embarrassment or success

OK, that was the crash course, heavily oversimplified, in what we know about how our brains process speech.
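Being a bit of a geek, I can’t resist illustrating that last strategy with a toy sketch (my own invention, not real neuroscience): each stage either produces an interpretation or fails, and the “qualified guessing” only fires when everything else has failed.

```python
from typing import Callable, Optional

# Toy fall-through model of the understanding strategies above
# (my own illustration, not real neuroscience).
def make_stage(name: str, succeeds: bool) -> Callable[[str], Optional[str]]:
    """Stub stage: returns an interpretation, or None if it fails."""
    return lambda utterance: f"{name}: understood {utterance!r}" if succeeds else None

stages = [
    make_stage("acoustic processing", succeeds=False),   # drowned in noise
    make_stage("lip reading", succeeds=False),           # speaker turned away
    make_stage("phonology + semantics", succeeds=False), # words sound alien
    make_stage("contextual processing", succeeds=False), # context lost
]

def understand(utterance: str) -> str:
    for stage in stages:
        result = stage(utterance)
        if result is not None:
            return result
    # Last resort: "alternative contextual qualified guessing" (fantasy!)
    return "qualified guess -> a question, embarrassment, or success"

print(understand("mumbled sentence in a noisy pub"))
```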

Taking that info into account, think about what happens if the signals change radically. What happens when a hearing-aid user, de facto deaf (unable to comprehend speech without the sensory aid of hearing aids, or the contextual aid of sign language, lip reading or written text), is fitted with a CI or two?

In my case: what happens when I have suffered from “recruitment” while using my hearing aids for many years, and then suddenly the recruitment is gone, AND the perceived frequencies have shifted totally out of their previously normal neural pathways, starting with the hair cells in my cochlea?

My implant feeds electronic impulses to the part of my cochlea with left-for-dead, broken hair cells, while the previously still somewhat functional part is now left abandoned, not receiving any stimuli anymore. (It’s like having played the piano on the far-left octaves your whole life, and suddenly someone moves the entire piano so that you now sit at the far right!)

Well, obviously my brain has some work to do! The rewiring of the neural pathways is one thing, and the brain’s processing is another. I believe we can agree that the neural rewiring, both in our nervous system and in our brains (which, I agree, are in fact part of our nervous system), is about new synaptic paths forming, adjusting our nervous system to the new sensory reality.

But what about the brain’s processing of these sensory inputs? The part of my brain that performs acoustic processing adjusts to the change in frequencies, and the new auditory virtual reality slowly becomes THE reality, due to the loss of the old auditory reality.

The phonology of all words has changed; how does my brain cope with that? Rewiring, relearning.

The short-term memory function now has to deal with input data that are totally new in appearance. Things don’t sound like they did before. A streetcar doesn’t sound like a streetcar. A woman in high heels sounds like a carpenter hammering down a nail. A kid laughing sounds like an animal dying. A kid crying sadly sounds like an anger fit.

Do you see what I’m getting at? The change in the quality of the perceived sound also changes the contextual package: what my brain interprets a specific sound to be also decides my initial contextual, and sometimes emotional, processing. So now my contextual database has to be reprogrammed as well.

The long-term memory databank contains data that are now invalid. My mother’s voice doesn’t match her voiceprint in my brain. All the people I have learned to identify by their speech patterns (how they pause, and so on) now need to be re-learned. It’s like having to replace your entire vinyl music collection with low-quality compressed digital music (like MP3s).

I will forget the old information and fill it up with the new. Just as the Borg in Star Trek say: “You will be assimilated.”

If the part of my brain that does the acoustic processing changes its algorithms, I assume it’s fair to expect a change in the output of that process, which means that the parts of my brain receiving the processed, now “re-digitized”, audio also have to change THEIR algorithms!

In that way, my entire language system is presently undergoing a complete, heavy and thorough modification.

I have noticed this in the following ways:

I can “hear” better, but I have problems remembering the first part of a sentence that I hear, OR I only perceive the first part and my brain skips the last part. I deduce from this “brain rewiring hypothesis” that my short-term memory is having trouble storing the strange-sounding words in its flash memory. The input data simply don’t fit properly.

The other parts of my language system also sometimes suffer from overload or faults, causing a crash. For example, when ambient noise occurs and the voice I am listening to drowns in that noise, my contextual and visual processing needs to take over; but since I have been so focused on the auditory processing (due to the new and strange sound quality), the take-over comes just a little too slowly for me to follow the person talking…

Think of that last paragraph as trying to follow an intricate discussion about a complex issue while having two or three kids climbing all over you, demanding attention. Sometimes they DO get your attention, and what happens then to the discussion you were following?

That’s when my “alternative contextual qualified guessing” kicks into gear 🙂

And this time I won’t even get into the emotional and psychological aspect of this brain-rewiring process that I’m currently undergoing… 🙂  I think each and every one of you who reads this can imagine the psychological and emotional implications for yourself.

Some things are best left unsaid?

Decoding sounds from Cochlear Implants

In this informative video you can supposedly get an idea of what kind of work my brain will have to do in order to decode those artificial electronic impulses into meaningful sounds….

I wouldn’t know if this is accurate or even true, as I haven’t been implanted yet, but I would love to get comments from my CI blog friends on this posting!

Bilateral CI research findings

I will post my findings on the issue of bilateral CI on my blog. Hopefully it helps someone else too…

I want to collect the data concerning this debate, in order to get an overview of what the medical community is discovering, as well as what they are writing and thinking about this issue.

Papers found in PubMed:

Patients fitted with one (CI) versus two (CI+CI) cochlear implants, and those fitted with one implant who retain a hearing aid in the non-implanted ear (CI+HA), were compared using the speech, spatial, and qualities of hearing scale (SSQ) (Gatehouse & Noble, 2004). The CI+CI profile yielded significantly higher ability ratings than the CI profile in the spatial hearing domain, and on most aspects of other qualities of hearing (segregation, naturalness, and listening effort). A subset of patients completed the SSQ prior to implantation, and the CI+CI profile showed consistently greater improvement than the CI profile across all domains. Patients in the CI+HA group self-rated no differently from the CI group, post-implant. Measured speech perception and localization performance showed some parallels with the self-rating outcomes. Overall, a unilateral CI provided significant benefit across most hearing functions reflected in the SSQ. Bilateral implantation offered further benefit across a substantial range of those functions.
(Link to more information about this paper)

Speech perception tests were performed preoperatively before the second implantation and at 3 months postoperatively. RESULTS: Results revealed significant improvement in the second implanted ear and in the bilateral condition, despite time between implantations or length of deafness; however, age of first-side implantation was a contributing factor to second ear outcome in the pediatric population. CONCLUSION: Sequential bilateral implantation leads to significantly better speech understanding. On average, patients improved, despite length of deafness, time between implants, or age at implantation.
(Link to more information about this paper)

The average group results in this study showed significantly greater benefit on words and sentences in quiet and localization for listeners using two cochlear implants over those using only one cochlear implant. One explanation of this result might be that the same information from both sides are combined, which results in a better representation of the stimulus. A second explanation might be that CICI allow for the transfer of different neural information from two damaged peripheral auditory systems leading to different patterns of information summating centrally resulting in enhanced speech perception. A future study using similar methodology to the current one will have to be conducted to determine if listeners with two cochlear implants are able to perform better than listeners with one cochlear implant in noise.
(Link to more information about this paper)

The Let Them Hear Foundation has done its own research:

Despite many insurers’ (in the US; my comment) continued erroneous assertions to the contrary, bilateral cochlear implantation is NOT an experimental or investigational procedure, and is medically necessary.  Bilateral cochlear implantation in children has been an accepted, mainstream medical practice since 1998.  Over 3000 have been performed, including over 1600 on children.

Several studies have shown that there is a vast improvement in sound localization ability in patients with bilateral cochlear implants.  In particular, the group of subjects who received a significant amount of improvement when bilaterally implanted were those who were initially implanted at a very early age, as Andrew was.  In September 2005, an international consortium of cochlear implant specialists published an article in the widely respected journal “Acta Oto-Laryngologica” formally recommending that all children with permanent bilateral profound hearing losses receive bilateral cochlear implants.  A recent publication by industry-leading otologist Dr. Robert Peters stated that:

Provision of binaural hearing should be considered the standard of care for hearing-impaired patients whenever it can be provided without significant risks. In severe to profoundly hearing impaired individuals, this can only be provided with bilateral cochlear implantation when hearing aids are inadequate. In carefully selected candidates, the benefits derived are significant, the surgical procedures well tolerated, and negative effects infrequent in both children and adults.

A second recent paper by well-known communication disorders specialist Dr. Ruth Litovsky concluded that: “Bilateral CIs can offer a combination of benefits that include better ear effects, binaural summation/redundancy effects and binaural unmasking. These effects have been illustrated in numerous patients world-wide; continued work in this field will no doubt lead to further improvements and increases in the size of each of these effects, for adults and for children.” Please refer to the following publications for additional information.

Another medical benefit of bilateral cochlear implantation is that it has been shown to improve speech recognition in noisy environments. It is expected that once a patient’s hearing with the second cochlear implant in place is maximized, they will notice a significant improvement in understanding speech in noisy environments. Comprehending speech amidst background noise is common in real-life situations, especially in classroom settings and learning environments, at the dinner table, or while talking in a car or on a plane. Please refer to the following studies for more details:
Read more from their conclusions here…

My hearing diagrams

I finally got around to scanning these charts and posting them here 😀

First of all, here is a source of terminology and technical explanations related to sound.

From 2004:

[Hearing diagram from 2004]

The yellow “banana” is the speech discrimination area for normal hearing. Deafness is defined as thresholds below 85 dB, and the measurement stops at 100 dB. On this test I was intent on doing the best I could, so I probably cheated by watching for cues in the face of the lady running the test, through the window of the booth I sat in… Another problem is phantom sounds, or echoes, that arise from the test itself. Did I hear it or not? Was it a phantom sound or a real test sound? I have taken this test so many times that I quickly pick up the rhythm of the sounds from the audiologist and know when the tones go up and down… I know their testing regime instinctively… The sad thing is that cheating on the tests only hurt myself; the hearing aids were adjusted based on these results…

From 2006, two years later:

[Hearing diagram from 2006]

There is a noticeable drop in the 125-500 Hz area (the deepest bass frequencies). This is the last test I took before commencing the long road to a CI in Norway. There is no doubt: my hearing is declining, I am clinically deaf, and I have been for some time…

I also took a speech comprehension test (the bottom chart on the 2006 diagram) and I think I scored 0%… I could only take wild guesses at what I heard…

What’s next?

Before long I will post an abridged translation of a letter that I have sent to the Norwegian Treasury Department (Finansdepartementet). This is something I have been working on for some time now, in the wake of the budget cuts at the premier hospital in Norway, Rikshospitalet (see my last postings regarding my interview on national TV, etc.). This work was also the reason why I needed some time off from my blog (good thing Easter came in the middle of it). Stay tuned, friends!

Explaining the analogy: "Recruitment" of hair cells in the cochlea

During my research into my own declining hearing and health, I came across information about a phenomenon involving the hair cells in the cochlea called “recruitment”. I strongly suspect “recruitment” is what is happening to me. It would certainly explain a lot of what happen(ed) to me, my hearing, and the fatigue…

(Most of the text that follows is copied from this page at hearinglosshelp.com and edited by myself for the sake of this blog and my readers.)

What is “Recruitment”?

Very simply, “recruitment” is when we perceive sounds as getting too loud too fast. How can sounds be too loud when one’s hearing is in fact vanishing, you may ask… Well, be patient with me and read on…

“Recruitment” is always a by-product of a sensorineural hearing loss. If you do not have a sensorineural hearing loss, you cannot have “recruitment”. In simple lay terms, this means the condition only affects those who have a significant loss of hearing caused (mainly) by hair-cell damage in the cochlea.

As a side note, there are two other phenomena that often get confused with “recruitment”. These are hyperacusis (super-sensitivity to normal sounds) and phonophobia (fear of normal sounds, resulting in super-sensitivity to them). Both hyperacusis and phonophobia can occur whether you have normal hearing or are hard of hearing.

An analogy for understanding how “Recruitment” got its name

Perhaps the easiest way to understand “recruitment” is to make an analogy between the keys on a piano and the hair cells in a cochlea.

The piano keyboard contains a number of white keys while our inner ears contain thousands of “hair cells.” Think of each hair cell as being analogous to a white key on the piano.

The piano keyboard is divided into several octaves. Each octave contains 8 white keys. Similarly, the hair cells in our inner ears are thought to be divided into a number of “critical bands” with each critical band having a given number of hair cells. Each critical band is thus analogous to an octave on the piano.

Just as every key on the piano belongs to one octave or another, so also, each hair cell belongs to a critical band.

The requirements for “Recruitment” 

When you play a chord on the piano, you press two or more keys together, but they send one combined sound signal to your brain. Similarly, when any hair cell in a given critical band is stimulated, that entire critical band sends a signal to our brain, which we “hear” as one unit of sound at the frequency that critical band is sensitive to. This is the situation when a person has normal hearing.

However, when we have a sensorineural hearing loss, some of the hair cells die or cease to function. When this happens, each “critical band” no longer has a full complement of hair cells. This would be analogous to a piano with some of the white keys yanked out. The result would be that some octaves wouldn’t have 8 keys any more.

Our brains don’t like this condition at all. They require each critical band to have a full complement of hair cells. Therefore, just as any government agency, when it runs short of personnel, puts on a recruitment drive, so too do our brains. But since all the hair cells are already in service, there are no spares to recruit.

Getting to the point – what “Recruitment” means

What our brains do is rather ingenious. They simply recruit some hair cells from adjacent critical bands. (Here is that word: recruit or recruitment.) These hair cells now have to do double duty or worse. They are still members of their original critical band and now are also members of one or more additional critical bands.

With only relatively few hair cells dead, adjacent hair cells may just do double duty. However, if many hair cells die, any given hair cell may be recruited into several different critical bands in order to give each critical band a full complement.

The results of the phenomenon known as “Recruitment” – the conclusion

The results of this “recruitment” give us two basic problems:

  1. The sounds reaching our brains appear to be much louder than normal. This is because the recruited hair cells still function in their original critical bands and also in the adjacent band(s) they have been “recruited” into.

    Remember that when any hair cell in a critical band is stimulated, the whole critical band sends a signal to our brains. So the original critical band sends one unit of sound to our brain, and at the same time, since the same hair cell is now “recruited” to an adjacent critical band, it stimulates that critical band also. Thus, another unit of sound is sent to our brains. Hence, we perceive the sound as twice as loud as normal.

    If our hearing loss is severe, a given hair cell may be “recruited” into several critical bands at the same time. Thus our ears could be sending, for example, eight units of sound to our brains, and we now perceive that sound as eight times louder than normal. You can readily see how sounds can get painfully loud very fast! This is when we complain of our “recruitment”. (There is a toy sketch of this multiplication after this list.)

    In fact, if you have severe “recruitment”, when a sound becomes loud enough for you to hear, it is already too loud for you to stand.

  2. The second result of “recruitment” is “fuzzy” hearing. Since each critical band sends one signal at the frequency of that specific critical band, when hair cells get recruited into adjacent bands, they stimulate every critical band they are a member of to send its signal also. Consequently, instead of hearing just one frequency for a given syllable of sound, our brains may now receive, for example, eight signals at the same time, each one at a different frequency.

    The result is that we now often cannot distinguish similar sounding words from each other. They all sound about the same to us. We are not sure if the person said the word “run” or was it “dumb,” or “thumb,” or “done,” or “sun,” or? In other words, we have problems with discrimination as well as with volume. If our “recruitment” is bad, our discrimination scores likely will go way down.

    When this happens, basically all we hear is either silence, often mixed with tinnitus, or loud noise with little intelligibility in it. Speech, when it is loud enough for us to even hear it, becomes just so much meaningless noise.

    This is why many people with severe recruitment cannot successfully wear hearing aids. Their hearing aids make all sounds too loud—so that they hurt. Also, hearing aids cannot correct the results of our poor discrimination. We still “hear” meaningless gibberish.

    However, people with lesser recruitment problems will find much help from properly adjusted hearing aids. Most modern hearing aids have some sort of “compression” circuits in them. When the compression is adjusted properly for our ears, these hearing aids can do a remarkable job of compensating for our recruitment problems.
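To make the arithmetic of “units of sound” concrete, here is a little toy simulation (my own sketch of the analogy above, not a physiological model; the band counts and random seed are made up): hair cells live in critical bands, dead cells trigger recruitment from neighbouring bands, and perceived loudness grows with the number of bands a surviving cell serves.

```python
import random

# Toy sketch of the recruitment analogy above (my own illustration,
# not a physiological model). Each critical band needs a full
# complement of hair cells; when cells die, the band recruits
# surviving cells from adjacent bands. Stimulating a cell fires every
# band it belongs to, one "unit of sound" per band.

NUM_BANDS = 10      # like octaves on the piano
CELLS_PER_BAND = 8  # like 8 white keys per octave

def average_loudness(fraction_dead: float) -> float:
    random.seed(1)
    alive = [[random.random() > fraction_dead for _ in range(CELLS_PER_BAND)]
             for _ in range(NUM_BANDS)]
    # memberships[band][cell] = number of critical bands this cell serves
    memberships = [[1] * CELLS_PER_BAND for _ in range(NUM_BANDS)]
    for b in range(NUM_BANDS):
        missing = CELLS_PER_BAND - sum(alive[b])
        # recruit surviving neighbours into band b (double duty or worse)
        neighbours = [(nb, c) for nb in (b - 1, b + 1) if 0 <= nb < NUM_BANDS
                      for c in range(CELLS_PER_BAND) if alive[nb][c]]
        for nb, c in neighbours[:missing]:
            memberships[nb][c] += 1
    # perceived loudness = average units of sound fired per surviving cell
    units = [memberships[b][c] for b in range(NUM_BANDS)
             for c in range(CELLS_PER_BAND) if alive[b][c]]
    return sum(units) / len(units) if units else 0.0

for dead in (0.0, 0.25, 0.5, 0.75):
    print(f"{dead:.0%} of hair cells dead -> ~{average_loudness(dead):.1f}x loudness")
```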

Sudoku vs. cognition

What in the world could the term Cognition have to do with Sudoku? Well, let me explain…

To solve a Sudoku, you need to be able to learn, reason and remember numbers, most of which has to do with the term “cognition” (click the word above for a precise definition).

I learned about my own cognitive condition from doing a lot of Sudoku over the past years. For instance, I learned that poor sleep over longer periods made my Sudoku-solving ability very poor. Also, if I was plainly tired from a long day, my Sudoku skills suffered. Other things that made Sudoku hard for me to solve were the (at the time) ever-present fatigue, the tinnitus, and my blood sugar level.

After I became quite skilled at Sudoku, I recognized variations in my own mental performance, and soon it became apparent to me that my performance followed certain patterns. This is the interesting part that made me want to share it with my readers.

Sudoku taught me when I was tired, in a time when I was always tired, if that makes sense??? It’s the fatigue thing I’m talking about… How did THAT help me? Well, there were variations in tiredness over time. Some days I just couldn’t remember things from 5 minutes earlier, or I had trouble concentrating on the task at hand (I have a special routine for solving them). And since I was all about getting better, noticing the good or bad days for Sudoku gave me an external way of measuring my mental state, in a period when my own built-in sensor needed calibration, so to speak 🙂
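If I had been systematic about it, even a tiny log like this hypothetical sketch (I never actually kept one; the numbers are made up) would have captured the idea of using solve times as an external gauge:

```python
from datetime import date
from statistics import mean

# Hypothetical sketch of the "external measurement" idea: log each
# Sudoku solve time and compare today against your running baseline.
log: list[tuple[date, float]] = []  # (day, minutes to solve)

def record(day: date, minutes: float) -> None:
    log.append((day, minutes))

def verdict(minutes: float) -> str:
    baseline = mean(m for _, m in log) if log else minutes
    return "rough day, slow down and rest" if minutes > 1.25 * baseline else "normal day"

record(date(2008, 3, 1), 12.0)   # made-up solve times
record(date(2008, 3, 2), 11.0)
print(verdict(19.0))  # well above the ~11.5 min baseline -> "rough day"
```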

Sudoku taught me how to trust and USE my own sense of tiredness again. The feeling of tiredness is a signal to ourselves to slow down, to take a break, to eat and drink, to sleep or take a nap and so on…

Yes, I was truly f***ed up, I had lost the ability to heed the signals my own body and mind gave me… Sudoku helped me almost in a scientific way to regain that.

I continue to do Sudoku, although not as much as I used to, and it is still a fine tool for measuring my own cognitive skills. I can recommend Sudoku to everyone as mental training. It has been, and continues to be, useful to me, not only as a tool for mental measurement but also as a hobby that trains my cognitive skills somewhat… And we could all do with better brains, right?

What I hear (or what’s left)

It would be a good idea for me to put down some kind of status as to how my hearing is these days (as a baseline):

Without my hearing aids I can barely hear:

  • My son singing certain notes at the top of his voice (this gives me an echo effect on that frequency until I hear new sounds)
  • A tractor right outside my windows (5 meters away)
  • Only the bass from music

With hearing aids in quiet surroundings I hear:

  • Well enough to understand spoken words with the aid of lipreading (better if my head is clear and rested)
  • When really silent: a noisy refrigerator, traffic noise outside the building, an airplane or helicopter in the sky. I get a “white noise” sound from running water.
  • My external hard-drive – the spinning disks vibrate into the wooden table.
  • Other people’s voices in the room, but I cannot understand them without lipreading.
  • Familiar voices on the mobile, for short conversations and messages. I most often have to repeat myself and ask for confirmation. It’s borderline.
  • Other people’s footsteps in the same building, maybe a slamming door.
  • Static noise from electrical FM-devices like my Phonak Smartlink

With hearing aids in a “quiet cafe” surrounding I can hear:

  • Spoken words from no more than 1 meter away, but I have to concentrate really hard
  • Other people speaking, but cannot make out what is said.
  • Music, but only in the form of unrecognised sounds…

With hearing aids in noisy surroundings I hear:

  • All sounds are garbled and mixed in an impossible soup of noise
  • I can extract a voice from 50 cm away if the noise isn’t too bad and I know the subject and the person (if I’m used to lipreading the person, there’s a better chance of understanding)
  • Cars and trucks travelling at high speed close by me
  • Dogs barking loudly

When watching a movie, with the sound fed directly into my hearing aids, I am dependent on captioning. Environmental sounds like running water (splashing), wind blowing, birds chirping, etc. are lost completely. Spoken words are not understood at all without captioning (dialogue switches, and camera angles change, too fast for lip-reading to be effective).

Music has lost its magic over the last few years. I can sense the rhythm, and hear most of the bass and drums. Percussion is completely gone. Perception of vocals depends on the type of music and the tone of the voice. Guitar has slowly disappeared over the last few years; it’s not “swinging” at all anymore…

I wrote this down because I want to use it for comparison later, when I get the CI (my personal baseline).

Making sense of the world through a cochlear implant

March 13, 2007 – Scientists at University College London and Imperial College London have shown how the brain makes sense of speech in a noisy environment, such as a pub or a crowd. The research suggests that various regions of the brain work together to make sense of what is heard, but that when the speech is completely incomprehensible, the brain appears to give up trying.

The study was intended to simulate the everyday experience of people who rely on cochlear implants, a surgically-implanted electronic device that can help provide a sense of sound to a person who is profoundly deaf or who has severe hearing problems.

Using MRI scans of the brain, the researchers identified the importance of one particular region, the angular gyrus, in decoding distorted sentences. The findings are published in the Journal of Neuroscience.

In an ordinary setting, where background noise is minimal and a person’s speech is clear, it is mainly the left and right temporal lobes that are involved in interpreting speech. However, the researchers have found that when hearing is impaired by background noise, other regions of the brain are engaged, such as the angular gyrus, the area of the brain also responsible for verbal working memory – but only when the sentence is predictable.

“In a noisy environment, when we hear speech that appears to be predictable, it seems that more regions of the brain are engaged,” explains Dr Jonas Obleser, who did the research whilst based at the Institute of Cognitive Neuroscience (ICN), UCL. “We believe this is because the brain stores the sentence in short-term memory. Here it juggles the different interpretations of what it has heard until the result fits in with the context of the conversation.”

The researchers hope that by understanding how the brain interprets distorted speech, they will be able to improve the experience of people with cochlear implants, which can distort speech and have a high level of background noise.

“The idea behind the study was to simulate the experience of having a cochlear implant, where speech can sound like a very distorted, harsh whisper,” says Professor Sophie Scott, a Wellcome Trust Senior Research Fellow at the ICN. “Further down the line, we hope to study variation in the hearing of people with implants – why is it that some people do better at understanding speech than others. We hope that this will help inform speech and hearing therapy in the future.”