Low signal Morse

camerart

Hi, first post.

I am learning Morse, and have been for a few years, but for me it takes time.

I am thinking also about a method of sending/receiving Morse when the signal is low.
I've heard of WSPR and the digital modes, and notice that they use a timer.
The method I'm thinking about would also use a timer, perhaps from GPS or PLL.
At each timed 'start', either a DOT or a DASH would be expected, so listen in the noise for a TONE.
If some TONE is present, then it is either a DOT or a DASH, so start recording.
If the TONE can be detected for longer than a DOT, then it is a DASH.
If a SPACE is detected, then it must be 1, 3 or 7 units long.

This would be almost live, so only a short delay, but the TONE will be clear at each end using this method.
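To make the idea concrete, here is a rough sketch (my own assumption of how it might work, not a tested system) of the slot-by-slot decision in software, assuming the receiver has already produced a tone-present/absent decision for each dot-length timing slot:

```python
def classify_slots(slots):
    """Turn per-slot tone decisions into dots, dashes, and spaces.

    slots: list of booleans, one per dot-length timing slot
    (True = tone detected). This is a hypothetical sketch of the
    scheme described above: tone runs of 3+ slots become dashes,
    silent runs of 3 or 7 slots become letter/word spaces.
    """
    out, i = [], 0
    while i < len(slots):
        run = 1
        while i + run < len(slots) and slots[i + run] == slots[i]:
            run += 1
        if slots[i]:
            out.append('-' if run >= 3 else '.')
        else:
            if run >= 7:
                out.append(' / ')   # word space
            elif run >= 3:
                out.append(' ')     # letter space
            # a single silent slot is the gap inside a character: ignore
        i += run
    return ''.join(out)
```

For example, `[tone, gap, tone, tone, tone]` would come out as `.-` (the letter A).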

Instead of me just turning out my thoughts, perhaps someone could help me with the system.

Let me know.
Cheers, Camerart.
 
The sort of thing you are talking about would be a hybrid of two systems.

Try reading up on QRSS Morse (Morse code with a very slow speed, specified by dit length rather than WPM), where transmissions may have three second long dits, or ten second long dits (they are the two main 'standard' types I know of). Reception is achieved by using a waterfall display, very often using SDR receivers.

The other system to read up on is coherent-CW. CCW, as it's known for short, relies on precise timing and two channels to allow noise cancelling (of sorts). It's too long-winded to go into here; just search for coherent-CW on the web and you will find LOTS of information. Off the top of my head the idea has been around for 25-30 years or so now, and at one point it was VERY popular. I 'think' that operators using moon-bounce EME systems still use it (possibly).

There are non-Morse systems that would do much better for weak signal usage, since that's what they were designed for from the outset. QRSS CW is very good for weak signal working using Morse, but you could not really consider it a 'real-time' system as QSO's can take hours to complete, and even then it is only the most basic of information exchanged by the operators.

CCW can be considered a real-time system, but it can involve some specialist software to generate and decode the required signal channels. I would guess that maybe some kind of adapter might be constructed using a modem IC at its heart. If you look at the C4FM 'fusion' system by Yaesu, that is literally a modem IC used to produce four channels or carriers that will fit within the bandwidth allowed by a normal NBFM transceiver. So long as there are at least two channels or carriers then it is possible to carry out error correction (and noise elimination) within software and also electronically, by comparing the two 'carriers'.

There was some experimentation a number of years ago with some kind of synchronous CW system? I'm not sure if that was separate to CCW or just another 'flavour' of it (it was around 20-30yrs ago and I've been to sleep since then!)

As I said, read up on QRSS CW, and read up on coherent-CW for ideas.

73, Mark...



 
Hi M,
Thanks for your reply.

When I'm 'inventing' I try not to read previous ideas, so as not to corrupt any of mine, but if given a suggestion to read something, I have to or I've wasted the suggester's time.
I looked up both of your suggestions, and, as you say, this must have some elements of previous ideas, due to physics.  E.g. the timing must be precise, and the use of filters.

If it's possible to work through it from knowledge and guesswork, for me that would be preferable.

The end result should be quite fast, say 20 WPM, accept whatever distance is possible, and be as live as possible.
Try to use more analog and less calculation.
Each end would have some hardware with coder/decoder on.
Each end would know when to listen for a tone.
The timing of the keyer should be converted to an accurate, say, 20 WPM, with an indication for the keyer to correct the speed.  This speed is my goal, but a slower test speed could be used.
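For reference, the standard PARIS convention makes that timing target easy to pin down: the dot length in milliseconds is 1200 divided by the WPM figure, so a 20 WPM dot is 60 ms.

```python
def dit_ms(wpm: float) -> float:
    """Dot (dit) duration in milliseconds, PARIS timing convention."""
    return 1200.0 / wpm

print(dit_ms(20))  # 60.0 ms per dot at 20 WPM
```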

I hope for some timing, and filtering suggestions.

C.





 
The problem you have with the system you are suggesting is not so much when to listen, but how to hear/detect the TONE (as you refer to it).

Even with WSPR, the timing is controlled by a GPS time reference simply so that all of the stations do not transmit at the same instant or receive at the same instant. The local GPS-controlled clock 'tells' them when to listen, but the system is still at the mercy of the signal to noise ratio of the incoming transmission.

Because of the very narrow 'channels' involved a good deal of signal to noise improvement is gained, but this is at the expense of the rate at which data can be sent. Narrower bandwidth = lower baud rate. Narrower bandwidth = improved signal to noise.
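As a rough illustration of that trade-off: noise power is proportional to bandwidth, so narrowing the receive filter buys a predictable S/N improvement of 10·log10(B_wide/B_narrow) dB. A hypothetical calculation:

```python
import math

def snr_gain_db(wide_bw_hz: float, narrow_bw_hz: float) -> float:
    """S/N improvement from narrowing the receive bandwidth.

    Assumes flat (white) noise, so noise power scales directly
    with bandwidth.
    """
    return 10.0 * math.log10(wide_bw_hz / narrow_bw_hz)

# e.g. going from a 2400 Hz SSB filter down to a 50 Hz CW filter:
print(round(snr_gain_db(2400, 50), 1))  # 16.8 dB
```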

If you look at the QRSS systems they can operate with incredibly low levels of signal, but the bandwidth is measured in millihetz and so the rate of the data sent has to be reduced accordingly, down to perhaps only one or two letters per minute!

So going back to your desired system overview, knowing WHEN to listen doesn't really help. Even if the system 'listens' at a particular time determined by some local clock say, you are still at the mercy of signal to noise with regards whether or not you can detect the TONE.

You have two choices possibly. One is to use very narrow bandpass filtering on receive, which would help to improve S/N ratio but would also limit the data rate. The other is to use a two-channel system as in Coherent CW, which would allow for error correction/data detection on receive (at the expense of requiring a little more bandwidth).
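On the narrow-filtering option: in software, single-tone energy detection is often done with the Goertzel algorithm, which computes one DFT bin cheaply. A minimal sketch (not tied to any particular rig or library):

```python
import math

def goertzel_power(samples, sample_rate, tone_hz):
    """Signal power in the single DFT bin nearest tone_hz (Goertzel).

    A high value relative to neighbouring frequencies suggests the
    tone is present; thresholding this is the software analogue of
    a narrow bandpass filter plus detector.
    """
    n = len(samples)
    k = round(n * tone_hz / sample_rate)   # nearest bin index
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 * s_prev2 + s_prev * s_prev - coeff * s_prev * s_prev2
```

The effective bandwidth is roughly sample_rate/n, so a longer analysis window gives a narrower 'filter', with exactly the data-rate penalty described above.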

73, Mark...
 
Hi B,
I'm not expecting miracles, and don't expect the distances that other systems use.
I haven't thought about bandwidth yet, and will work with any ideas that help get a better sound at both ends; better than the human ear.  Perhaps that's not possible?

I have no experience of what a poor S/N ratio sounds like at the point of it being too difficult to READ; hopefully Morse operators will chip in.

C.
 
@camerart, don't take it the wrong way, but it looks to me like you want to reinvent the wheel. There are already similar digital protocols that can do the job, and most of them are open source. Wouldn't it be better to improve on one of them rather than "inventing" a new protocol? Besides, I don't understand if we are talking about manually keyed Morse, computer generated/decoded Morse, or a combination of the two.
 
Hi O,
When I was at school, I told my dad that I was inventing a rotary engine.  He said, "If Ford can't do it then neither can you."  Five years later Wankel came out with a rotary-engined car.  (The engine was invented in the 19th century.)  I didn't know this, but it told me that my thoughts are not wasted, as mine was not the same design.  This has been valuable in my career.  It has morphed into a linear, only battery charger for Hybrid cars.

I am not aiming to reinvent, but see if I can come up with an alternative, without cheating on old ideas.

This is to clean up 'live' Morse, which should make it more pleasant to work.
I don't know if a slight delay is acceptable, and is not jarring to the operators, perhaps someone can tell me.

The sender keys normally, and this is coded into accurately timed Morse.  At the other end, if the signal is just too difficult to work, then hopefully this will decode it and give a clear 'string'.

If this is a waste of time, after positive efforts, it should soon become apparent, and I'll move on :)

C
 
Apples vs oranges, really. Car engine design and development is based on secrecy, patents and proprietary protocols, whereas digital communications are pretty much open standards implemented by, mostly, open source software (exception to this are semi-proprietary digital voice systems over FM).

 
Hi O,
Yes, not the same at all.  I shouldn't have included it really, but I was illustrating that I'm used to taking comments that could be discouraging.

I accept that digital communications can be standard, but 'hams' have changed things over the years by experimenting, so I hope positive messages are posted too, please.
C
 
camerart said:
I am not aiming to reinvent, but see if I can come up with an alternative, without cheating on old ideas.

This is to clean up 'live' Morse, which should make it more pleasant to work.
I don't know if a slight delay is acceptable, and is not jarring to the operators, perhaps someone can tell me.

The sender keys normally, and this is coded into accurately timed Morse.  At the other end, if the signal is just too difficult to work, then hopefully this will decode it and give a clear 'string'.
If you are not using the carrier itself i.e. true CW, then you would in effect be using a sub-carrier system. But I can't see how this would help. Unless your system was to morph in to a fully fledged digital mode (which would then allow for error correction in software), you will have just as much difficulty in pulling the intelligence out of the noise.

Other than coherent CW, or using narrow DSP filtering, the only other option would be phase-locked reception using one of the tone decoder ICs available.

Now, although these can 'clean up' a signal, you have the issue of how long it takes the IC to lock as each character part is sent, and you will also find that either the device would false trigger (if the threshold was too low), or else it may keep trying to lock on to adjacent signals, which would again lead to false triggering.

The system that comes closest to what you are trying to do is the coherent CW one.

73, Mark...
 
Hi M,

So far, I'm not narrowing the method, but I doubt CW would be useful.

As you suggest a tone chip would be listening to the tone, 'say' 100kHz while trying to filter out anything else, which may take a fraction of the DOT repeatedly till a SPACE is heard.  If any tone appeared in the expected SPACE, then it is a DAH.  Even if there was no tone in the 'SPACE' but where the expected next DOT was a tone, then this could also be a DAH, which may be confirmed by timing?  Does this make any sense?
C.
 
camerart said:
As you suggest a tone chip would be listening to the tone, 'say' 100kHz while trying to filter out anything else, which may take a fraction of the DOT repeatedly till a SPACE is heard.  If any tone appeared in the expected SPACE, then it is a DAH.  Even if there was no tone in the 'SPACE' but where the expected next DOT was a tone, then this could also be a DAH, which may be confirmed by timing?  Does this make any sense?
Yes, but I can't see how it would work.

If I'm reading it correctly, you're saying a tone is present (for say the period of one dit), then if after this one dit period the tone is still being detected, the 'system' would interpret this as a dah. But then the whole thing falls over in your next sentence, because you suggest that, even if there has been no tone detected for a one dit period, if a tone does then occur this will be interpreted as a dah. :-*

What if the sender intended to transmit two dits? That would give a one dit period of tone, then a one dit period of silence, then finally another one dit period of tone again. Based on your system 'protocol' as you outlined it above, the two dits would be interpreted as a single dah. :-*

And as I keep repeating, it doesn't matter how clever the timing is, or what the coding involved is, unless you can hear (detect) the difference between a tone (carrier) being present or missing then you will never decode anything!

Other than coherent-CW which transmits two carriers that can be compared to one another, a possible alternative might be to take the real-time keying of the sender and then 'speed it up' (in a digital sense). That way rather than just sending say a single di-dah for "A" you could send a burst of 'digital' di-dahs, so the single character becomes repeated say five times in the same time period it would normally take to send the character once at 'normal' speed.

Doing that would allow the decoding system five chances to get the character correct. You could use say a bin system where the average energy channel could be overlaid five times making it easier to detect a signal being present or not (kind of how the QRSS waterfall systems work, only with more emphasis on real-time decoding).

Or you could simply try to 'listen' for the tone, with the system deciding the final character based on how many times it achieved the same decode of the five 'digital' repeats.
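That voting idea could be sketched roughly like this (a hypothetical helper, assuming each of the five 'digital' repeats has already been independently decoded into a dot/dash string of the same length):

```python
from collections import Counter

def majority_decode(repeats):
    """Majority vote across repeated decodes of the same character.

    repeats: list of decoded attempts, e.g. ['.-', '.-', '-.', '.-', '..'].
    Each position is decided by whichever symbol occurred most often,
    so occasional noise-induced errors in individual repeats are outvoted.
    """
    out = []
    for symbols in zip(*repeats):
        out.append(Counter(symbols).most_common(1)[0][0])
    return ''.join(out)
```

With five repeats, any position decoded correctly at least three times comes out right, which is the whole point of trading air time for redundancy.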

Either way it's a lot of effort, and most CW op's will tell you that the best filter/decoder is between your ears! I know from playing with various filters and enhancing systems over many years, that once you get to the point that the signal starts to become unreadable to a human operator then you will find that active decoders will also struggle.

It always comes back to the signal to noise ratio. You will always get to a stage where the noise beats the desired signal, that's the way the universe works. If the signal is above the noise then it will be readable by ear anyhow. The only way to reduce the noise energy is to go to a narrower filter so that you are listening to a smaller chunk of the spectrum (this is how ultimately the QRSS systems work, with millihertz wide channels and energy averaging).

In one of your prior posts you mention that:-
I am not aiming to reinvent, but see if I can come up with an alternative, without cheating on old ideas.

This is to clean up 'live' Morse, which should make it more pleasant to work.
I don't know if a slight delay is acceptable, and is not jarring to the operators, perhaps someone can tell me.

Essentially that would come back to filtering and/or using tone detection (read up on the LM567, NE567, SL567, etc.)

Using a tone detector can give quite a narrow 'lock range' (i.e. narrow bandwidth), and ICs such as the ones above give a switched digital (logic) output, which could be used to key a local tone oscillator. The beauty of such a scheme is that the output tone can be set to anything the operator wants; it would be frequency independent of the incoming 'tone'.

Such ICs are used predominantly in Morse decoders in order to detect the keying pattern of the incoming Morse. The logic-level switched output can then go on to be 'interpreted' by the decoder/reader.
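In software, the equivalent of using the 567's logic output to key a clean local sidetone might look like this (a sketch only; one gate value per output sample, and the 600 Hz sidetone is an arbitrary choice):

```python
import math

def regenerate_tone(key_gate, out_hz=600.0, sample_rate=8000):
    """Re-key a clean local sidetone from a detector's logic output.

    key_gate: iterable of booleans, one per output sample
    (True = key down, i.e. the detector reports tone present).
    The oscillator phase keeps running while gated off, so the
    tone resumes without a phase jump.
    """
    step = 2.0 * math.pi * out_hz / sample_rate
    phase, out = 0.0, []
    for down in key_gate:
        out.append(math.sin(phase) if down else 0.0)
        phase += step
    return out
```

This is also exactly why the result sounds 'sterile', as noted below: the regenerated tone carries none of the original signal's dynamics.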

However, other than creating a more 'sterile' version of the incoming signal you will still reach a limit determined by the noise on a channel, plus the signal audio no longer has any dynamic feel to it. It will sound very bland and 'sterile', not at all pleasant to listen to for long periods (I know this from experimenting with such a system a few years ago).

As ottavio mentioned, you seem to be trying to reinvent the wheel. Just creating a different way to do the same job that can be done in several other ways seems pointless. :-*

If your 'system' was to give, say, a 10dB advantage over current methods, great! I'm sure CW operators would snap it up. But something which essentially gives the same performance, with no improvement other than possibly cleaning up the signal a little, seems a lot of hard work for very little gain. :-X

Sorry to sound negative, but that's the reality of the situation as far as I can see it.

73, Mark...

 
Hi M,
Remember, I'm making this up as I go along from suggestions, to improve my thoughts.

And as I keep repeating, it doesn't matter how clever the timing is, or what the coding involved is, unless you can hear (detect) the difference between a tone (carrier) being present or missing then you will never decode anything!
  When listening to low signal, does it come and go at all?

The idea of speeding up each digit is similar to my thoughts about a fraction of a DOT, but I was talking about many sections of TONE, whereas you talk about waiting for the whole digit and speeding that up.

Yes, 2x sections of TONE, could look either like 2x DOTs or 1x DAH but, for example if there was also TONE at GAP4 then it would mean something different.  I doubt that there's time to check for the sense of what the digit may be?

The end result of the receiver should be assembled, well timed, clear MORSE, not simply filtered out noise, so it would be good to listen to.

I was talking to a MORSE expert last night, and asked him what weak signals sounded like.  In his answer, he said that in training he was asked to listen to low signal Morse sent in 3x different TONEs at the same time, and to pick which one he preferred and use that.  So perhaps the sections of TONE could be sent at varying frequencies, or TONEs?

C.
 
Camerart, interesting topic from an academic point of view, but I have to question your motives. What is it that you want to achieve? Implementing a new digital protocol or just improving the digital decoding of hand-generated morse (because we know that computer-generated morse is quite easy to decode even at subliminal signals)?

Having a clear goal in mind will determine the result but I can't see what the point is.
 
Hi O,
My motive is to modify, e.g., poorly timed and low S/N ratio Morse, and convert it to clear, well timed Morse for a better experience.
C
 
camerart said:
Yes, 2x sections of TONE, could look either like 2x DOTs or 1x DAH but, for example if there was also TONE at GAP4 then it would mean something different.

The end result of the receiver should be assembled, well timed, clear MORSE, not simply filtered out noise, so it would be good to listen to.
So pretty much serial data then? With a start and stop bit, maybe parity included too? And sandwiched in between either a binary 'code' to represent tone present (key down), or code absent (key up)?

Or else the other one that came to mind was Baudot Code https://en.wikipedia.org/wiki/Baudot_code
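For reference, the start/stop-bit framing mentioned above (8N1, the common asynchronous serial format: one start bit, eight data bits sent LSB-first, one stop bit) looks like:

```python
def frame_8n1(byte: int):
    """Frame one byte as 8N1 asynchronous serial bits.

    Returns the on-the-wire bit sequence: start bit (0),
    eight data bits LSB-first, stop bit (1). Sketched here
    purely to illustrate the framing idea, not any specific mode.
    """
    return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

print(frame_8n1(0x41))  # 'A' -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```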

You asked:-
When listening to low signal, does it come and go at all?
On HF with signal fading, yes!

Also you said:-
So perhaps the sections of TONE could be sent at varying frequencies, or TONEs?
Well, that would be frequency shift keying or FSK, of which there are a number of schemes, some use one tone to represent a mark and another tone a space.
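A minimal sketch of that two-tone FSK keying (the 2125/2295 Hz mark/space pair and 45.45 baud are the classic RTTY values, used here purely as illustrative defaults):

```python
import math

def fsk_samples(bits, mark_hz=2125.0, space_hz=2295.0,
                baud=45.45, sample_rate=8000):
    """Generate phase-continuous two-tone FSK audio.

    Each bit is sent as one baud interval of the mark tone (1)
    or the space tone (0). Keeping the phase continuous across
    tone changes avoids clicks and keeps the signal narrow.
    """
    samples, phase = [], 0.0
    spb = int(sample_rate / baud)          # samples per bit
    for bit in bits:
        f = mark_hz if bit else space_hz
        step = 2.0 * math.pi * f / sample_rate
        for _ in range(spb):
            samples.append(math.sin(phase))
            phase += step
    return samples
```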

73, Mark...

 
Hi M,
I looked at those examples, you gave, but don't tend to dig too deeply, in case it biases my thoughts.

I am concentrating more on how, say, a DIT/DAH could be sliced into sections and coded into the best form for transmission, so that the receiver is best able to pick out any sections of TONE to be formed back into either a DIT or DAH.  Also, when no TONE is heard, try to work out whether it is poor signal or a GAP.

If this all turns out to be a previous system, then I'll be quite happy with that.
C.
 
Hi M,
In a previous post you said "Either way it's a lot of effort, and most CW op's will tell you that the best filter/decoder is between your ears! I know from playing with various filters and enhancing systems over many years, that once you get to the point that the signal starts to become unreadable to a human operator then you will find that active decoders will also struggle.

It always comes back to the signal to noise ratio. You will always get to a stage where the noise beats the desired signal, that's the way the universe works. If the signal is above the noise then it will be readable by ear anyhow. The only way to reduce the noise energy is to go to a narrower filter so that you are listening to a smaller chunk of the spectrum (this is how ultimately the QRSS systems work, with millihertz wide channels and energy averaging)."

I had an idea this morning regarding sawtooth TONE SIGNALS and frequencies out of human hearing, but I did a quick check, and I think I have to concede that there have been many people cleverer than me working on this over the years, so I'll stop here, unless called.

Thanks for all replies, it was fun :)
C.
 
camerart said:
...I am concentrating more on how, say, a DIT/DAH could be sliced into sections and coded into the best form for transmission, so that the receiver is best able to pick out any sections of TONE to be formed back into either a DIT or DAH.  Also, when no TONE is heard, try to work out whether it is poor signal or a GAP.
What you have described there could be one of many digital audio systems.

Your dit or dah 'tone' would be encoded within a digital transmission. The decoder would simply be reading the data. If the signal was strong enough to decode the data then the 'system' could easily decipher dits, dahs, and gaps. If the signal became weak to the point where decoding was no longer possible then straight away the 'system' would interpret this as total data loss (rather than just a gap in a transmission), and so you would have zero output from your digital receiver.

I seem to recall that when a group of Radio Amateurs were conducting tests on VHF and UHF with digital versus analogue audio systems they found that as the signal to noise ratio got worse (due to a weakening signal strength), beyond a certain point the digital system did give very much improved audio. But, they also found that once really weak signals were encountered the digital system became erratic in decoding with subsequent loss of segments of audio, but a human operator listening to an analogue transmission, even though also a very weak signal and of poor signal to noise ratio, was still able to make out and understand the conversation.

So if we were to split the signal strengths into four groups (strong, average, weak, and very weak), it looks as though the digital system would be fine for the first three, even offering an improvement on weak signals when compared to analogue, but once we arrive at the last group the human ear/brain combination still wins through.

As far as I am aware both D-STAR and C4FM digital systems were used during the tests.

73, Mark...
 
camerart said:
I had an idea this morning regarding sawtooth TONE SIGNALS and frequencies out of human hearing, but I did a quick check, and I think I have to concede that there have been many people cleverer than me working on this over the years, so I'll stop here, unless called.

Thanks for all replies, it was fun :)
C.
No need to 'vanish' from the forum, there are many other things you can discuss, you know!

Although the main 'genre' of the forum is Morse related we still have general sections too.

73, Mark...
 