camerart said:
As you suggest a tone chip would be listening to the tone, 'say' 100kHz while trying to filter out anything else, which may take a fraction of the DOT repeatedly till a SPACE is heard. If any tone appeared in the expected SPACE, then it is a DAH. Even if there was no tone in the 'SPACE' but where the expected next DOT was a tone, then this could also be a DAH, which may be confirmed by timing? Does this make any sense?
Yes, but I can't see how it would work.
If I'm reading it correctly, you're saying a tone is present (for, say, the period of one dit), and if after that one-dit period the tone is still being detected then the 'system' would interpret this as a dah. But then the whole thing falls over in your next sentence, because you suggest that even if no tone has been detected for a one-dit period, a tone that then occurs will also be interpreted as a dah. :-*
What if the sender intended to transmit two dits? That would give a one-dit period of tone, then a one-dit period of silence, then another one-dit period of tone. Based on your system 'protocol' as you outlined it above, the two dits would be interpreted as a single dah. :-*
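To put some numbers on that ambiguity, here's a minimal timing sketch (purely my own illustration in Python, with an assumed dit length and a made-up classify() helper, not anyone's actual decoder): a single dah and a pair of dits occupy the same three dit-units of time, and the only thing that separates them is whether the receiver actually detects the one-unit silence in the middle.

```python
# Standard Morse timing, everything measured in 'dit units'.
# A dah is 3 units of tone; two dits are 1 unit tone, 1 unit silence, 1 unit tone.

DIT = 1.0   # assumed dit length in arbitrary time units

def classify(key_events):
    """key_events: list of (state, duration) tuples, state True = tone present."""
    out = []
    for state, duration in key_events:
        units = round(duration / DIT)
        if state:                      # tone present
            out.append('dah' if units >= 3 else 'dit')
        else:                          # silence
            if units >= 7:
                out.append(' / ')      # word gap
            elif units >= 3:
                out.append(' ')        # letter gap
            # a 1-unit silence is just the gap between elements of one character
    return out

# One dah versus two dits: same total tone time over 3 units,
# told apart only by detecting the silence in the middle.
print(classify([(True, 3.0)]))                              # ['dah']
print(classify([(True, 1.0), (False, 1.0), (True, 1.0)]))   # ['dit', 'dit']
```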
And as I keep repeating, it doesn't matter how clever the timing is or what coding is involved: unless you can hear (detect) the difference between a tone (carrier) being present or absent, you will never decode anything!
Other than coherent-CW, which transmits two carriers that can be compared to one another, a possible alternative might be to take the real-time keying of the sender and then 'speed it up' (in a digital sense). That way, rather than sending just a single di-dah for "A", you could send a burst of 'digital' di-dahs, so the single character gets repeated, say, five times in the same time period it would normally take to send the character once at 'normal' speed.
Doing that would give the decoding system five chances to get the character correct. You could use, say, a bin system where the energy in each channel is overlaid and averaged across the five repeats, making it easier to detect whether a signal is present or not (rather like how the QRSS waterfall systems work, only with more emphasis on real-time decoding).
Or you could simply try to 'listen' for the tone, with the system deciding the final character based on how many of the five 'digital' repeats produced the same decode.
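As a rough illustration of that second option, here's what a majority vote over the repeats might look like (a sketch only: decode_once is a hypothetical stand-in for whatever per-repeat decoder you actually use, and five repeats is just the number assumed above):

```python
from collections import Counter

def decode_with_repeats(repeats, decode_once):
    """repeats: the received copies of the same character (any format the
    per-repeat decoder understands). Returns the most common decode and
    how many of the repeats agreed on it."""
    guesses = [decode_once(r) for r in repeats]
    best, count = Counter(guesses).most_common(1)[0]
    return best, count

# Example with a dummy per-repeat decoder that got it wrong once out of five:
noisy_repeats = ['A', 'A', 'N', 'A', 'A']
print(decode_with_repeats(noisy_repeats, decode_once=lambda r: r))  # ('A', 4)
```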
Either way it's a lot of effort, and most CW ops will tell you that the best filter/decoder is between your ears! I know from playing with various filters and enhancement systems over many years that once the signal starts to become unreadable to a human operator, active decoders will also struggle.
It always comes back to the signal-to-noise ratio. You will always reach a stage where the noise beats the desired signal; that's the way the universe works. If the signal is above the noise then it will be readable by ear anyhow. The only way to reduce the noise energy is to go to a narrower filter, so that you are listening to a smaller chunk of the spectrum (this is ultimately how the QRSS systems work, with millihertz-wide channels and energy averaging).
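To give a feel for the numbers (these figures are just illustrative, not from anything above): noise power scales with bandwidth, so going from a typical 500 Hz CW filter down to a QRSS-style 0.5 Hz bin buys you roughly 30 dB less noise in the detection channel.

```python
import math

def noise_advantage_db(wide_bw_hz, narrow_bw_hz):
    """Reduction in noise power from narrowing the detection bandwidth,
    assuming flat (white) noise across the band."""
    return 10 * math.log10(wide_bw_hz / narrow_bw_hz)

print(noise_advantage_db(500, 0.5))   # ~30.0 dB less noise in the narrow bin
```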
In one of your prior posts you mention that:-
I am not aiming to reinvent, but see if I can come up with an alternative, without cheating on old ideas.
This is to clean up 'live' Morse, which should make it more pleasant to work.
I don't know if a slight delay is acceptable, and is not jarring to the operators, perhaps someone can tell me.
Essentially that would come back to filtering and/or using tone detection (read up on the LM567, NE567, SL567, etc.).
Using a tone detector can give quite a narrow 'lock range' (i.e. narrow bandwidth), and ICs such as the ones above give a switched digital (logic) output, which could be used to key a local tone oscillator. The beauty of such a part is that the output tone can be set to anything the operator wants; it would be independent of the frequency of the incoming 'tone'.
Such ICs are used predominantly in Morse decoders in order to detect the keying pattern of the incoming Morse. The logic-level switched output can then go on to be 'interpreted' by the decoder/reader.
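For completeness, here's roughly what the software equivalent of that narrow tone-detector-plus-logic-output approach looks like: a single-bin Goertzel detector thresholded into a tone-present/tone-absent signal. This is my own sketch (not a model of the 567's PLL), and the sample rate, tone frequency and threshold are arbitrary assumptions.

```python
import math

def goertzel_power(samples, sample_rate, tone_hz):
    """Power of the signal in a narrow bin centred on tone_hz (Goertzel algorithm)."""
    n = len(samples)
    k = int(0.5 + n * tone_hz / sample_rate)
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def tone_present(block, sample_rate=8000, tone_hz=700, threshold=1e3):
    """Logic-level output: True while the tone is detected in this audio block."""
    return goertzel_power(block, sample_rate, tone_hz) > threshold
```

Each True/False result could then key a clean local sidetone and/or feed the decoder, which is exactly where the 'sterile' regenerated-audio trade-off below comes in.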
However, other than creating a cleaner, regenerated version of the incoming signal, you will still reach a limit determined by the noise on the channel, and the audio no longer has any dynamic feel to it. It will sound very bland and 'sterile', not at all pleasant to listen to for long periods (I know this from experimenting with such a system a few years ago).
As ottavio mentioned, you seem to be trying to reinvent the wheel. Creating a different way to do a job that can already be done in several other ways seems pointless. :-*
If your 'system' were to give, say, a 10 dB advantage over current methods, great! I'm sure CW operators would snap it up. But something which essentially gives the same performance, with no improvement other than possibly cleaning up the signal a little, seems like a lot of hard work for very little gain. :-X
Sorry to sound negative, but that's the reality of the situation as far as I can see it.
73, Mark...