This morning I started a conversation with ChatGPT because yesterday I did not speak at all. Of course, understanding spoken English now has absolute priority; nevertheless, speaking English mustn’t be neglected.
I explained to ChatGPT my method for improving my comprehension while practising speaking aloud. Here is the transcript, lightly cleaned up for readability:
I: I have set myself a benchmark. For me, this benchmark means that I will consider my listening practice successful when I can understand the BBC and CNN news without problems, without difficulties. That’s my goal.
Chat: It sounds like a good approach. Keep focusing on it.
I: The first thing: I try to record the news, but not all of it, because that would be much too long, and it would be very boring to repeat the whole broadcast again and again. So I record only a part of it, one to five minutes, I think, and then I try once or twice to understand that section. But as you know, the news is not only the presenter; there are also correspondents giving their opinions about the situation or explaining what is happening in the world. So I’m exposed to several kinds of speech, several dialects, several kinds of slang and so on. I take only a very short piece for the recording and try to understand it with two repetitions. If that works, I take another section, and so on. But in most cases it doesn’t work, even after I’ve listened to it for hours.

So I looked for additional support. You know that the news has no real subtitles. There are some headlines and titles, but the typical subtitles that a YouTube video would have are missing. So I looked for ways to create the subtitles myself. There are two ways. One way is a kind of dictation of subtitles: I play the recording, for instance, into the Word dictation facility, so that during listening, or afterwards, I can see which words I didn’t understand. But the Word dictation facility can’t supply punctuation, so I had to add the punctuation afterwards myself, and it’s not so easy to see the meaning when the punctuation is missing. I didn’t find any way to make it add punctuation automatically. Automatic punctuation would be possible if I used the transcription facility of Windows 11, which has quite good punctuation. I haven’t tested that yet, but having the words of the recording in front of me would help me with the critical passages. Sometimes, though, the texts are so complicated that I don’t understand them even then, so it’s a rather heavy, cumbersome kind of support.

When I look for really good support, I always find that you are the best. The other way: I dictate the recording to ChatGPT, and then I get back a very nicely punctuated text and, after that, a short summary of it. That’s very nice, but it’s, of course, a crutch, so to speak; I don’t know whether a crutch is the right thing. It’s like always having an umbrella over me that supports me. I’m not so sure whether it’s good to go this way, and how long I should go this way. Could this kind of help end up holding back my understanding? Should I rather expose myself to unsupported speech? I don’t know what the right way is to make a big step forward in understanding.
Chat: It’s important to find a balance that works for you. Using subtitles and transcription tools can be very helpful, especially when you’re starting out. As your comprehension improves, gradually reducing these supports can help you get used to more natural speech. It’s all about taking it step by step and being patient with your progress.
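For anyone who wants to automate the subtitle step I describe above, here is a minimal sketch using the open-source openai-whisper library. This is not one of the tools mentioned in the transcript (Word dictation, Windows 11 transcription, ChatGPT), only an assumed alternative that produces punctuated text and rough timestamps; the file name and model size are placeholders, and I haven’t tested this as part of my routine.

```python
# Rough sketch (an assumed alternative, not my actual routine): transcribe a
# short recorded news clip with the open-source openai-whisper library, which
# adds punctuation automatically. "news_clip.mp3" and the model size are placeholders.
import whisper

model = whisper.load_model("small")          # smaller models are faster, larger ones more accurate
result = model.transcribe("news_clip.mp3")   # a 1-5 minute recording, as described above

# Full punctuated transcript, useful for checking the words I did not catch.
print(result["text"])

# Subtitle-like segments with timestamps, to jump back to a difficult passage.
for seg in result["segments"]:
    print(f"{seg['start']:6.1f}s - {seg['end']:6.1f}s  {seg['text'].strip()}")
```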
