Wednesday 29 January 2014

The Microphone

This post is a reflection on what happened, and what tends to happen, when I listen to a person using a microphone; in this case the speaker was very senior. I was also prompted to write it because our sign language interpreter was not available at this staff meeting, which left me with a limited range of options. I could have boycotted the meeting, citing the lack of access, but I decided that this route would be political suicide: most of the staff, except the speaker, know that I can hear a bit, and that I manage fairly well in face-to-face conversations with many people, although there are some people I cannot follow even with lipreading.
Even when sitting in the front or second row, there are problems for an oral deaf person. I am an 'oral deaf' person because I am deaf but, with the aid of hearing-aids, I can hear fairly well, though not naturally; I will come back to the kind of sound that hearing-aids reproduce later. The 'oral' part of this identity refers to the use of spoken language: as I have said, I can hear a fair amount and I speak well. But for oral deaf people in the audience, there are still difficulties in following meetings, especially when the speaker uses a microphone. This may seem paradoxical; surely a microphone would be fantastic for me. Let me go through what a microphone can and cannot do for me as an oral deaf person.
Although it would be reasonable to assume that this is the best seat in the house for a deaf person with hearing-aids, or for a deaf person with 'deaf-aids' (an interpreter), there are still challenges. In this meeting, for example, the microphone happened to be in just the right place. Being able to see the speaker's lips matters more to me than hearing the person through the sound system, and because the speaker stands close to the microphone, I can usually do both. It is a misconception that I hear better through a sound system, and I have plenty of experience of microphones in meetings, at church, at functions, at award events, weddings and funerals. In actual fact, proximity to the speaker matters more to me than the microphone and its often excessive, distorted volume: I hear the words best when the sound is supplemented by clear speech (think of Microsoft's 'ClearType' font, but as a default 'clear sound') and a view of the speaker's lips. In other words, this real-time streaming of words through sound and sight, without the intervention of technology, is usually, and ironically, the best way for me to follow.
However, the moment the speaker shifts their body a little and the microphone obscures their face, particularly when it sits in front of their mouth, the signal breaks up and the communication weakens. I can compensate only slightly by shifting my posture in the chair to see their mouth again. This has to be done subtly: squirming in front of others is at best mildly annoying and impolite, and the people behind me will have their view interrupted. Worse, I may be blocking the view of another person who relies on seeing the speaker's lips, another deaf person or someone with a mild hearing loss, in which case they will be really irritated with the movement that made them lose their hold on the speaker's words. Sometimes this can look funny, when members of the audience sway along together with the speaker's movements. At least it shows whose attention the speaker has, and who is left behind; but it can be embarrassing for those who are left out. While on this point: the speaker who talks and talks, pacing up and down and moving away from the podium or lectern, is committing the cardinal sin of poor etiquette for the oral deaf in the audience. It is not possible to see a person's mouth when they are moving around and turning away, and constantly refocusing on a moving person is really distracting. Hearing people can still follow, because the words follow the microphone, but I am usually lost with a speaker who moves excessively. Remember, too, that the lighting is best at the microphone set up at the lectern; when the speaker moves around, the light is no longer optimal, and frequently I cannot see the speaker's mouth at all because the light is now behind them, blotting out the critical details of the face and therefore the lip movements. Lip reading is effective to no more than about 15 metres.
Beyond that distance the signal degrades, so a speaker should not wander beyond this visual range. My message here is: 'Please stay still so I can see you speak.' Watch out, too, for the overuse of hand movements and gestures, which are very distracting to someone who is watching intently, and keep your hands away from your mouth when speaking; otherwise I will miss the words and hear only a mumbled sound.
Now, looking at the impact of the microphone on sound quality, it is fair to say that this is a mixed blessing. The type of microphone, whether a big microphone on a stand or a lapel microphone, and its positioning both affect the quality of the sound. Generally the big old-fashioned mikes are better, but they tend to be used incorrectly: many speakers stand too close to the microphone and speak much too fast. This causes two problems for me. Firstly, apart from my not being able to see their mouth, the sound is distorted by their being too close. Secondly, a common mistake is to speak at exactly the same volume, pitch and pace as when talking normally to a room of people. For me, the speaker's clarity is greatly enhanced when the microphone is not right next to their mouth, which eliminates the distortions and stops the speaker either shouting into the microphone or whispering. The problem comes from the speaker's over-reliance on the technology to carry their voice to everyone: the speaker just has to speak, and the technology will do the rest. But if speakers are more aware of how they sound, and of the impact of their voice through the microphone, this can be fine-tuned out. I have seen novice speakers change their voice when using a microphone, and for the worse. This is the consequence of nerves: their fear of public speaking is heightened when a microphone is used, because everyone is looking at them. The voice often tightens up into a rushed stream of higher-than-normal speech; the sooner it is over, the better the speaker feels. Instead, the change of voice that is more effective with a microphone, for hearing-aid users in particular, because everything gets amplified, the bad along with the good, is to slow down and enunciate each word fully, especially at the ends of sentences. Mumbling the words at the end of a sentence or point is so frustrating.
Imagine how the hearing-aided person feels, following everything until the last word is garbled. It is not my place to keep asking the speaker to repeat, and when it is just the last part or word, which may or may not be important, how frustrating this practice becomes. Good news readers, for example, never mumble their words; every word counts. Just remember to slow down enough to say everything, and drop the pitch an octave so that the pacing can be slower but more measured and controlled. I have found that a lower pitch typically carries better through a microphone than a higher pitch, and that lapel microphones tend to pick up the deeper pitch better than hand microphones. I am sure that not holding a microphone also helps the speaker to speak at a more normal pitch, as it is less intimidating. My point is that having a 'microphone voice' is a valuable skill for academics, and it comes with awareness of one's own amplified voice and with practice, until it becomes regular practice.
There are some other issues for the speaker to be aware of that apply to members of the audience with hearing loss and hearing-aids. I have to focus on the speaker's mouth for extended periods with a really intense eye-gaze, which in itself can be intimidating to the speaker: why is this person staring at me so much, that is rude! No, I am really trying to follow everything you are saying. If the speaker knows who needs to lip-read deliberately, as opposed to the casual watching of most hearing people in the audience, then this concern will dissipate. At the same time, this causes two problems for the ultra-attentive viewer: it is exhausting to focus so intensely for long periods, and the focus is often accompanied by repair work on sentences that were not heard properly, were mis-heard, or where information was simply absent. So there is an ongoing, simultaneous process in the viewer's mind of repairing the speaker's speech so that it makes sense. On top of that, the viewer, like me, is expected to think about the words and put forward intelligent questions or comments; that expectation is typical of academic and formal discourse in meetings of various forms. The difficulty for me is that there is simply not enough time to process all the information, correct it where there are errors, and still come up with a response that shows my comprehension of the speaker's points in an articulate and intelligent way. A slower pace from the speaker when using a microphone is therefore a powerful strategy for accommodating audience members like me. It gives me time to catch up, collect my thoughts and respond with an appropriate comment, even if I do not say it out loud. Before I leave this point, it needs to be added that this focused listening and watching is both physically and mentally tiring. A pause every now and again, to look away and recover, is a useful way of extending the meeting without my losing my grasp of the session.
For example, the more tired I get from maintaining eye-contact to keep up, the more easily I am distracted by anything happening around us in the venue. This distraction takes two main forms. The first is auditory distraction, such as someone's cell phone ringing or a bus driving past. Remember that hearing-aid technology amplifies everything, and gives everything equal value, so I am always trying to work out whether a new sound is important or not. The hearing brain has a natural ability to tune out background noise quickly and effectively; this is a processing capability that I lack, so every sound is potentially a major distraction to me. The more tired I become from focused attention, the less successful I am at ignoring these distractions. And when I divert my attention to the noise, I lose the speaker because I looked away, and have to look again and catch up. Sometimes, and speakers need to know this, the noise drowns out the speaker altogether, say a loud plane overhead. The speaker can usually carry on, because most people with normal hearing can still follow in and through the noise. I cannot. Pausing for a moment until the noise is gone is more effective than losing me and then having to repeat, if I ask the speaker to repeat what I missed. Bear in mind that these extra noises are more than a minor distraction and irritation to me. They are a wave of noise that hearing listeners are adept at riding, but alarming for me, because this is a wave I cannot surf. Instead, the wave crashes all over me and throws me around. It really is a disorientating experience, and hearing-aids are not the surfboard for riding these waves.
The second kind of noise is visual noise. During the last meeting, I was distracted by the long banners flapping in the wind to the left, above the speaker's head. Again, as I get more and more tired, the visual distractions become more tempting and intrude on my field of vision, competing for attention. Once I know what is moving, and how and why, I can try to tune the stimulus out, provided it does not keep interfering with alarming movements. A movement is annoying either because it is sudden, or because it is repeated and bugs you, just like a clicking pen or a tapping foot. When the speaker is aware of the visually distracting elements in the meeting and restores calm, by closing windows or removing the flapping, waving movements, then I am in my happy space of attentively listening to and looking at them.
Bringing this back to microphones, it is worth knowing that hearing-aids and microphones are electronic devices that generate and process sound electronically. This means the result sounds different from natural sound, though it has its place. We tend to expect too much of microphones, and forget that for some people, like me, this sound is going to be reprocessed as electronic sound yet again, which leads to the double loss of fidelity that I experience. I discovered this phenomenon in the new lecture theatre with its sound system. When I sat out of lip-reading range, more than about 10 m from the speaker, I could not follow the speaker, even though the volume was not a problem. It was loud enough for everyone, but with at least four speakers (the devices, not the people) playing the professor's voice at the same time, it became a garbled blur as the sound from each speaker overlapped and interfered with the others. Thus I caught only a few words. Can you imagine how dangerous it is to proceed on the basis of a few words, when everyone assumes you were there and heard everything? In reality, only a few isolated words were clear to me. For people without hearing-aids, this use of technology is a non-event. But for hearing-aid users, this layering of one technology (microphones) on top of another (hearing-aids) creates a sound barrier. Being in visual range of the speaker is therefore imperative.
But there is still a problem. When a meeting has many speakers, such as questions or comments from the floor, it can be really difficult to follow. I know there are people who are particularly aware of my needs: they speak clearly and make sure I can see them. Remember to give people time to see you and make eye contact; I need this so I can follow. There is nothing worse than a weak voice from the back, behind a pillar. If it is standard practice to come closer and use a microphone, or to stand close to me (I will be at the front anyway), then I am really relieved to be included in the discussion without making a scene. Of course, some people prefer not to use a microphone and want to say something off-the-cuff, and I may still miss that information. It would really help me if the main speaker summarised the comment, for all of us and for me. This is extra work for the speaker, but it also ensures that the speaker has heard the comment accurately and understood the point. Alternatively, a note could be made so that I can read about the points raised in the session. Not all meetings are minuted, and I understand that writing something down makes it more formal, so people tend not to say anything that could or would be written down. Perhaps if it were explained that all comments are recorded on a without-prejudice basis, so that they cannot be used against anyone, this would help capture the content of the meeting. Then I would know if and what I missed, and could ask specific questions based on that information.
I hope this information about the limitations of microphones, and good practice in using them, will be useful in building awareness of the needs of hearing-aided members of the audience. Sometimes I have a Sign Language Interpreter there, and this ameliorates many of the difficulties associated with microphones, especially when the sound quality is too poor for me to pick up or I cannot lip-read the speaker. But I cannot lipread and watch the interpreter at the same time; it is physically and linguistically impossible. When an interpreter is there, I focus on the signing more than on the speaker, and gain a lot of information, though it needs to be said that this is still tiring. The next blog will look at how and why I use a sign language interpreter, and some do's and don'ts for academic staff members to bear in mind when working with one.


Guy Mcilroy      
