The process of understanding speech with lipreading

A while back I underwent a test called the IOWA test at Briskeby. Briskeby is a state-funded research and education center for the deaf and hard of hearing. The IOWA test shows to what extent someone uses the technique of lipreading in the process of understanding verbal communication. I scored 50% (with no audio). I will post that and other documentation here as soon as I get around to prepping it for this blog…

What dawned on me today is that I use the same method to compensate for my lack of hearing with lipreading as I do with the audio I'm capable of capturing.

To explain it I need to use the analogy of a computer that understands speech:

When I meet someone new, I immediately start recording their speech. Then I decode it: the intonation, the accent, the dialect, the volume, the size of their voice, the frequencies and more (than I'm aware of myself, I'm sure).

This is a constant process in my interaction with people.

The recordings are put into a database (i.e. my own memory). As someone speaks, I pick up similar pieces from the database to compare what I just heard with what my database has recorded. If it's a probable match, I assume I perceived it correctly.

If it's an unknown word or phrase, I record it and save it in my database after making sure I understood correctly.

If I'm mistaken, I correct my database, or add a new recording so I can compare that word, spoken in that particular manner, at a later time.
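Sticking with the computer analogy, the loop above could be sketched as a toy program. This is purely illustrative, not how the brain (or any real speech recognizer) works: words stand in for recordings, and a simple string-similarity score from Python's standard library stands in for the fuzzy matching. All names here are made up for the example.

```python
from difflib import SequenceMatcher

class SpeechMemory:
    """Toy model of the record / compare / correct loop."""

    def __init__(self, threshold=0.75):
        self.database = []          # remembered "recordings" (here: plain words)
        self.threshold = threshold  # how close counts as a probable match

    def perceive(self, heard):
        """Compare what was heard with the database; return the best
        probable match, or None if nothing is close enough."""
        best, best_score = None, 0.0
        for known in self.database:
            score = SequenceMatcher(None, heard, known).ratio()
            if score > best_score:
                best, best_score = known, score
        return best if best_score >= self.threshold else None

    def learn(self, word):
        """Save a new word or phrase after confirming it was understood."""
        if word not in self.database:
            self.database.append(word)

memory = SpeechMemory()
memory.learn("streetcar")
print(memory.perceive("streetcarr"))  # close enough: matches "streetcar"
print(memory.perceive("bus"))         # unknown: None, so go learn it
```

The `learn` call at the end of a failed match is the "correct my database" step: a mistake either replaces a bad recording or adds a new one.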

Ok, that’s the basics of the guesswork I do when audio isn’t 100% (which is almost never).

With lipreading it's a little simpler, but the process is the same. I record a video of facial expressions along with lip, jaw and tongue positions/movements.

When someone I know well suddenly speaks a different language, I always have huge problems guessing what they're saying. It usually takes several repetitions and an additional explanation or translation before I'm able to put together all the segments of the word that I did not perceive.

The better I get to know people, and the better they get to know how to facilitate their speech for me, the better my rate of guessing correctly by using pieces from my database.

For someone to facilitate speech for me means that they pause if something noisy comes by, like a streetcar or a bus. It means that they don't cover their mouth with a cup of coffee, a glass, their hand, and so on.

Nagging pays off

Last week I received the first letter from Rikshospitalet. They stated that I was eligible for a CI operation, and that they guarantee I get the operation before January 2010. Whee! NOT…

Well, actually it’s good news. One small step has been made.

Now I need to figure out how to speed things up; I don't plan to wait 3 years to get my hearing back. First I'll check the rules for applying to other hospitals in Norway. I hear Haukeland in Bergen has shorter waiting lists… I also want to check Swedish, Danish and English hospitals… But special rules apply for surgeries outside Norway. I'm probably not eligible for that unless Rikshospitalet exceeds its own guarantee about operating on me before 2010. But I will exhaust and check out all possibilities…

In the meantime I'm trying hard to figure out how to live a "balanced" life. Maybe I'm trying too hard… Damn, I'm tired sometimes…