Saturday, February 6, 2016

KX3 Remote Control and audio streaming with Raspberry Pi 2

REMOTE CONTROL OF ELECRAFT KX3

I wanted to control my Elecraft KX3 transceiver remotely using my Android phone. A quick Internet search yielded this site by Andrea IU4APC. His KX3 Companion application for Android allows remote control using a Raspberry Pi 2, and he also links to an audio streaming application called Mumble.

A quick ham shack inventory of hardware and software showed that I already had everything required for this project.

A short video showing how this works is on YouTube:



KX3, Raspberry Pi 2 and Android phone connected together over WiFi.

HARDWARE COMPONENTS

  • Elecraft KX3
  • Elecraft KXUSB serial cable for the KX3
  • Raspberry Pi 2 with Raspbian Linux (I have a 32 GB SD memory card; 8 GB should also work)
  • Behringer UCA202 USB audio interface and audio cables
  • Android phone (I have a OnePlus One)


CONFIGURE RASPBERRY PI AND KX3 COMPANION APP

Following the instructions, I plugged the KXUSB serial cable into the KX3 ACC1 port and into one of the Raspberry Pi's USB ports.

I installed ser2net with the following commands on the command line:

sudo apt-get update 
sudo apt-get install ser2net 

then I edited the /etc/ser2net.conf file:

sudo nano /etc/ser2net.conf 

and added the following line (the fields are TCP port : mode : timeout : serial device : baud rate and serial options):

 7777:raw:0:/dev/ttyUSB0:38400 8DATABITS NONE 1STOPBIT

and saved the file by pressing CTRL+X and then Y

Then I started ser2net:

ser2net 
sudo /etc/init.d/ser2net restart 
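
Before moving on to the app, you can optionally sanity-check the serial bridge from any computer on the same network. The short Python sketch below is my own test idea, not part of Andrea's instructions: it opens a TCP connection to the ser2net port and sends the Elecraft CAT command FA; which asks the KX3 for its VFO A frequency. Replace the IP address with your Raspberry Pi's address (finding it is covered further below).

import socket

HOST = "192.168.0.47"   # your Raspberry Pi address - substitute your own
PORT = 7777             # the TCP port configured in /etc/ser2net.conf

s = socket.create_connection((HOST, PORT), timeout=5)
s.sendall(b"FA;")       # Elecraft/Kenwood-style CAT query for the VFO A frequency
print(s.recv(64))       # expect a reply such as FA00014060000; if the KX3 is on
s.close()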

Once the host side was done, I downloaded the KX3 Companion app (link here) on my Android phone and opened it.

To enable the KX3 Remote functionality you have to edit three options in the "Remote Settings" section. Check the "Use KX3Remote/Piglet/Pigremote" option.

 

Set your PC/Raspberry Pi IP address in the "KX3Remote/Piglet/Pigremote IP" option. The setup below assumes that your Raspberry Pi and Android phone are connected to the same WiFi network.

In my case the Raspberry Pi uses its wlan0 interface, connected to the WiFi router, and its IP address is 192.168.0.47. This address depends on your local network configuration; you can find the Raspberry Pi's IP address with the command

ip addr show 

Set the chosen port number (7777) in the "KX3Remote/Piglet/Pigremote Port" option.

Now you can test the connection. Tap the "ON" button in the top left corner to see whether the connection is successful. The message "Connected to Piglet/Pigremote" should show up at the bottom, as shown below:

If you are having problems with this, here are some troubleshooting ideas:

  • check the Raspberry Pi IP address again
  • check that Raspberry Pi and Android Phone are on the same Wifi network
  • check that your KX3 serial port is set to 38400 baud (this is the default in the KX3 Companion app)

If everything works, you should be able to change the frequency and band on the KX3 by tapping the Band+/Band- and Freq+/Freq- buttons in the app. The current KX3 frequency is updated in the FREQUENCY field between the buttons as you turn the VFO on the KX3.


CONFIGURE RASPBERRY PI 2 FOR AUDIO 

Plug the USB audio interface into a Raspberry Pi 2 USB port. In my case I used a Behringer UCA202, but there are many other alternatives available.

The audio server is called Mumble. It is a low-latency Voice over IP (VoIP) server designed for the gaming community, but it works well for streaming audio from the KX3 to the Android phone and back. There is a great page that describes the installation in more detail.

I used the following commands to install the Mumble VoIP server:

   sudo apt-get install mumble-server
   sudo dpkg-reconfigure mumble-server


The last command will present you with a few options; set these however you would like Mumble to operate.

  • Autostart: I selected Yes 
  • High Priority: I selected Yes (This ensures Mumble will always be given top priority even when the Pi is under a lot of stress) 
  • SuperUser: Set the password here. This account will have full control over the server.

You need to know the Raspberry Pi 2's IP address when configuring the Mumble client, so write it down; you will need it shortly. In my case it was 192.168.0.47, found with:

ip addr show

You may want to edit the server configuration file. I didn't make any changes, but the installation page recommends changing the welcome text and setting a server password. You can do that with this command:

sudo nano /etc/mumble-server.ini
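
For reference, the two settings mentioned there look roughly like this in /etc/mumble-server.ini (illustrative placeholder values; I kept the defaults myself, and key names may differ slightly between Mumble versions):

# example values only - substitute your own text and password
welcometext="Welcome to the KX3 audio streaming server"
serverpassword=changeme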

Finally, you need to restart the server:

sudo /etc/init.d/mumble-server restart

Now that we have the Mumble server running, we need to install the Mumble client on the Raspberry Pi 2. This can be done with this command:

sudo apt-get install mumble

Next you start the client application by typing:

mumble

This starts the Mumble client. On the first start you need to go through a few configuration windows.

You need to have the USB audio interface input connected to the KX3 Phones output when going through the Mumble Audio Wizard. I turned the audio volume to approximately 30.



You need to select the USB audio device as the input device. The default device is "Default ALSA device", which is the onboard audio chip. Click the Device drop-down list and select "SysDefault card - USB Audio Codec", as shown in the picture below.

The drop-down list might look different depending on your hardware configuration. Select the SysDefault USB device.
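
If you are unsure which entry is the USB interface, you can first list the ALSA capture devices from a terminal (my own extra check, not part of the original walkthrough); the UCA202 normally shows up as a "USB Audio CODEC" card:

arecord -l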

Once the Input and Output devices have been selected you can move forward with Next.

Next comes device tuning. I selected the longest delay for best sound quality.

Next comes volume tuning. Make sure that the KX3 audio volume is at least 30; you should see the blue bar moving in sync with the KX3 audio. Follow the instructions.




Next comes the voice activity detection setting. Follow the instructions.


Next comes quality selection. I selected high, as I am testing this on my local LAN.


Audio settings are now completed.

Next comes the server connection. Use "Add New..." and give the IP address that you wrote down earlier. I gave the server the label "raspberrypi" and the username "pi". You don't have to change the port.

When you connect to the server you should see a view like the one below.

The next step is to download a Mumble client on the Android phone and configure it.


CONFIGURE ANDROID PHONE 

I downloaded the free Mumble client Plumble on my Android phone. You need to point it at the Mumble server running on the Raspberry Pi 2: open the Plumble client and tap the "+" sign in the top right corner.

I gave it the label "KX3" and the IP address of the Mumble server running on the Raspberry Pi 2, in my case 192.168.0.47. For the username I used my ham radio call sign.


Since I did not configure a password on my server I left that field empty. Once the server has been added, you can try to connect to it.

OPERATION

If everything has gone well you should be able to connect to the Mumble VoIP server and hear the KX3 audio on your mobile phone.



On the Raspberry Pi 2 you should see that another client ("AG1LE" in my case) has connected to the server. See the example below:

NEXT STEPS 

If you want to go beyond just listening to the KX3 and actually operate remotely, you need to configure your WiFi router to allow connections over the Internet. Also, the USB audio interface output needs to be connected to the microphone (MIC) input of the KX3, and the KX3 must have VOX turned on to enable transmit audio.

Documenting these steps will take a bit more time, so I leave it for the next session.

 Did you find these instructions useful?  Any comments or feedback? 

73 
Mauri AG1LE








Sunday, December 27, 2015

TensorFlow: a new LSTM RNN based Morse decoder

INTRODUCTION

In my previous post I created an experiment to train an LSTM recurrent neural network (RNN) to detect symbols in noisy Morse code. I continued the experiments, but this time I used the new TensorFlow open source library for machine intelligence. The flexible architecture of TensorFlow allows deploying computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.

TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

EXPERIMENT 

I started with the TensorFlow MNIST example authored by Aymeric Damien. MNIST is a large database of handwritten digits that is commonly used for machine learning experiments and algorithm development. Instead of training the LSTM RNN model on handwritten characters, I created a Python script to generate a large amount of Morse code training material. I downloaded ARRL Morse training text files and combined them into one large text file; from this text file the Python script generates properly formatted training vectors, over 155,000 of them. The software is available in IPython notebook format on GitHub.

The LSTM RNN model has the following parameters:

# Parameters
learning_rate = 0.001 
training_iters = 114000 
batch_size = 126

# Network Parameters
n_input = 1 # each Morse element is normalized to dit length 1 
n_steps = 32 # timesteps (training material padded to 32 dit length)
n_hidden = 128 # hidden layer num of features 
n_classes = 60 # Morse character set 
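
To make these parameters concrete: each training vector represents one character as an on/off sequence sampled at one timestep per dit, zero-padded to 32 steps. The snippet below is only an illustrative sketch of that idea (the actual generator script works from the ARRL text files and differs in detail):

import numpy as np

MORSE = {'Q': '--.-', 'U': '..-', 'I': '..', 'C': '-.-.', 'K': '-.-'}  # excerpt of the code table

def char_to_vector(c, n_steps=32):
    v = []
    for sym in MORSE[c]:
        v += [1.0] * (1 if sym == '.' else 3)   # dit = 1 unit on, dah = 3 units on
        v += [0.0]                              # one-dit gap between elements
    return np.pad(v, (0, n_steps - len(v)), 'constant')  # zero-pad to n_steps timesteps

print(char_to_vector('Q'))   # '--.-' becomes 14 on/off steps, padded with zeros to 32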

Training takes approximately 15 minutes on my Thinkpad X301 laptop. The progress of the loss function and the accuracy percentage over the training run is depicted in Figure 1 below. The final accuracy was 93.6% after 114,000 training samples.

Figure 1.  Training progress over time

I tested the model with generated data while gradually adding noise to the signals using the "sigma" parameter in the Python scripts. The results are below:

Test case:     QUICK BROWN FOX JUMPED OVER THE LAZY FOX 0123456789
Results:
Noise 0.0:  QUICK BROWN VOC YUMPED OVER THE LACY VOC ,12P45WOQ.
Noise 0.02: QUICK BROWN VOC YUMPED OVER THE LACY FOC 012P45WOQ.
Noise 0.05: QUICK BROWN VOC YUMPED OVER THE LACQ VOC ,,2P45WO2.
Noise 0.1:  Q5IOK BROWN FOX YUMPED O4ER THE LACY FOC 012P4FWO2,
Noise 0.2:  .4IOK WDOPD VOO 2FBPIM QFEF TRE WAC2 4OX 0,.PF52Q91
As can be seen above, at "sigma" level 0.2 the decoder starts to make a lot of errors.

CONCLUSIONS


The software learns the Morse code by going through the training vectors multiple times. By going through 114,000 characters in training, the model achieves 96.3% accuracy. I did not try to optimize anything; I just used the reference material that came with the TensorFlow library. This experiment shows that it is possible to build an intelligent Morse decoder that learns the patterns from the data, and that it should be possible to scale up to more complex models with better accuracy and better tolerance for QSB and noisy signals.

TensorFlow proved to be a very powerful new machine learning library that was relatively easy to use. The biggest challenge was to figure out what data formats to use with the various API calls. Given the complexity and richness of the TensorFlow library, I am fairly sure that much can be done to improve the efficiency of this software. Since TensorFlow has been designed to work on a desktop, server, tablet or even a mobile phone, this opens up new possibilities to build an intelligent, learning Morse decoder for different platforms.

 73 Mauri AG1LE

Tuesday, November 24, 2015

Experiment: Deep Learning algorithm for Morse decoder using LSTM RNN

INTRODUCTION

In my previous post I created a Python script to generate training material for neural networks.
The goal is to test how well modern deep learning algorithms can decode noisy Morse signals with heavy QSB fading.

I did some research on various frameworks and found this article from Daniel Hnyk. My requirements were quite similar: full Python support, LSTM RNNs built in, and a simple interface.
He had selected Keras, which is available on GitHub. There is a fairly active mailing list for Keras users that is quite useful for getting support from other users. I installed Keras on my Linux laptop, and with Jupyter interactive notebooks it was easy to start experimenting with various neural network configurations.


SIMPLE RECURRENT NEURAL NETWORK EXPERIMENT

Using various sources and the above mailing list I came up with the following experiment. I have uploaded the Jupyter notebook file to GitHub in case the reader wants to replicate the experiment.

The source code and printed output are shown below in courier font, and I have added some commentary as well as the graphs as pictures.


In [12]:
#!/usr/bin/env python
# MorseEncoder.py  - Morse Encoder to generate training material for neural networks
# Generates raw signal waveforms with Gaussian noise and QSB (signal fading) effects
# Provides also the training target variables in separate columns. Example usage:
#
# WPM= 40 # speed 40 words per minute
# Tq = 4. # QSB cycle time in seconds (typically 5..10 secs)
# sigma = 0.02 # add some Gaussian noise
# P = signal('QUICK BROWN FOX JUMPED OVER THE LAZY FOX ',WPM,Tq,sigma)
# from matplotlib.pyplot import  plot,show,figure,legend
# from numpy.random import normal
# figure(figsize=(12,3))
# lb1,=plot(P.t,P.sig,'b',label="sig")
# lb2,=plot(P.t,P.dit,'g',label="dit")
# lb3,=plot(P.t,P.dah,'g',label="dah")
# lb4,=plot(P.t,P.ele,'m',label="ele")
# lb5,=plot(P.t,P.chr,'c',label="chr")
# lb6,=plot(P.t,P.wrd,'r*',label="wrd")
# legend([lb1,lb2,lb3,lb4,lb5,lb6])
# show()
# P.to_csv("MorseTest.csv")
#
# Copyright (C) 2015   Mauri Niininen, AG1LE
#
#
# MorseEncoder.py is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# MorseEncoder.py is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with bmorse.py.  If not, see <http://www.gnu.org/licenses/>.

import numpy as np
import pandas as pd
from numpy import sin,pi
from numpy.random import normal
pd.options.mode.chained_assignment = None  #to prevent warning messages

Morsecode = {
 '!': '-.-.--',
 '$': '...-..-',
 "'": '.----.',
 '(': '-.--.',
 ')': '-.--.-',
 ',': '--..--',
 '-': '-....-',
 '.': '.-.-.-',
 '/': '-..-.',
 '0': '-----',
 '1': '.----',
 '2': '..---',
 '3': '...--',
 '4': '....-',
 '5': '.....',
 '6': '-....',
 '7': '--...',
 '8': '---..',
 '9': '----.',
 ':': '---...',
 ';': '-.-.-.',
 '<AR>': '.-.-.',
 '<AS>': '.-...',
 '<HM>': '....--',
 '<INT>': '..-.-',
 '<SK>': '...-.-',
 '<VE>': '...-.',
 '=': '-...-',
 '?': '..--..',
 '@': '.--.-.',
 'A': '.-',
 'B': '-...',
 'C': '-.-.',
 'D': '-..',
 'E': '.',
 'F': '..-.',
 'G': '--.',
 'H': '....',
 'I': '..',
 'J': '.---',
 'K': '-.-',
 'L': '.-..',
 'M': '--',
 'N': '-.',
 'O': '---',
 'P': '.--.',
 'Q': '--.-',
 'R': '.-.',
 'S': '...',
 'T': '-',
 'U': '..-',
 'V': '...-',
 'W': '.--',
 'X': '-..-',
 'Y': '-.--',
 'Z': '--..',
 '\\': '.-..-.',
 '_': '..--.-',
 '~': '.-.-'}
    

def encode_morse(cws):
    s=[]
    for chr in cws:
        try: # try to find CW sequence from Codebook
            s += Morsecode[chr]
            s += ' '
        except:
            if chr == ' ':
                s += '_'
                continue
            print "error: '%s' not in Codebook" % chr
    return ''.join(s)



def len_dits(cws):
    # length of string in dit units, include spaces
    val = 0
    for ch in cws:
        if ch == '.': # dit len + el space 
            val += 2
        if ch == '-': # dah len + el space
            val += 4
        if ch==' ':   #  el space
            val += 2
        if ch=='_':   #  el space
            val += 7
    return val


def signal(cw_str,WPM,Tq,sigma):
    # for given CW string i.e. 'ABC ' 
    # return a pandas dataframe with signals and  symbol probabilities
    # WPM = Morse speed in Words Per Minute (typically 5...50)
    # Tq  = QSB cycle time (typically 3...10 seconds) 
    # sigma = adds gaussian noise with standard deviation of sigma to signal
    cws = encode_morse(cw_str)
    #print cws
    # calculate how many milliseconds this string will take at speed WPM
    ditlen = 1200/WPM # dit length in msec, given WPM
    msec = ditlen*(len_dits(cws)+7)  # reserve +7 for the last pause
    t = np.arange(msec)/ 1000.       # time array in seconds
    ix = range(0,msec)               # index for arrays

    # Create a DataFrame and initialize
    col =["t","sig","dit","dah","ele","chr","wrd","spd"]
    P = pd.DataFrame(index=ix,columns=col)
    P.t = t              # keep time  
    P.sig=np.zeros(msec) # signal stored here
    P.dit=np.zeros(msec) # probability of 'dit' stored here
    P.dah=np.zeros(msec) # probability of 'dah' stored here
    P.ele=np.zeros(msec) # probability of 'element space' stored here
    P.chr=np.zeros(msec) # probability of 'character space' stored here
    P.wrd=np.zeros(msec) # probability of 'word space' stored here
    P.spd=np.ones(msec)*WPM #speed stored here 

    
    #pre-made arrays with multiple(s) of ditlen
    z = np.zeros(ditlen) 
    z2 = np.zeros(2*ditlen)
    z4 = np.zeros(4*ditlen)
    dit = np.ones(ditlen)
    dah = np.ones(3*ditlen)
      
    # For all dits/dahs in CW string generate the signal, update symbol probabilities
    i = 0
    for ch in cws:
        if ch == '.':
            dur = len(dit)
            P.sig[i:i+dur] = dit
            P.dit[i:i+dur] = dit
            i += dur
            dur=len(z)
            P.sig[i:i+dur] = z
            P.ele[i:i+dur] = np.ones(dur)
            i += dur

        if ch == '-':
            dur = len(dah)
            P.sig[i:i+dur] = dah
            P.dah[i:i+dur]=  dah
            i += dur            
            dur=len(z)
            P.sig[i:i+dur] = z
            P.ele[i:i+dur] = np.ones(dur)
            i += dur

        if ch == ' ':
            dur = len(z2)
            P.sig[i:i+dur] = z2
            P.chr[i:i+dur]=  np.ones(dur)
            i += dur
        if ch == '_':
            dur = len(z4)
            P.sig[i:i+dur] = z4
            P.wrd[i:i+dur]=  np.ones(dur)
            i += dur
    if Tq > 0.:  # QSB cycle time impacts signal amplitude
        qsb = 0.5 * sin((1./float(Tq))*t*2*pi) +0.55
        P.sig = qsb*P.sig
    if sigma >0.:
        P.sig += normal(0,sigma,len(P.sig))
    return P
In [13]:
print ('MorseEncoder started')
%matplotlib inline
from matplotlib.pyplot import  plot,show,figure,legend, title
from numpy.random import normal
WPM= 40
Tq = 1.8 # QSB cycle time in seconds (typically 5..10 secs)
sigma = 0.01 # add some Gaussian noise
P = signal('QUICK',WPM,Tq,sigma)
figure(figsize=(12,3))
lb1,=plot(P.t,P.sig,'b',label="sig")
title("QUICK in Morse code - (c) 2015 AG1LE")
legend([lb1])
show()
print ('MorseEncoder finished. %d datapoints created' % len(P.sig)) 

MorseEncoder started

The Jupyter notebook plots this graph, which shows the text 'QUICK' converted to a noisy signal with strong QSB fading. The signal drops close to zero between the letters C and K, as you can see below.


Figure 1.  The training signal containing noise and QSB fading
The next section of the code imports some libraries (including Keras) that are used for the neural network experimentation. I also prepare the data in the format that Keras requires.


MorseEncoder finished. 1950 datapoints created
In [14]:
# Time Series Testing - Morse case
import keras.callbacks
from keras.models import Sequential  
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM

import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Data preparation 
# slide a 1000-sample input window over the data to predict the next 100 samples; this yields nb_samples (850) windows
samples = 1950
examples = 1000
y_examples = 100

x = np.linspace(0,1950,samples)
nb_samples = samples - examples - y_examples
data = P.sig

# prepare input for RNN training  - 1 feature
input_list = [np.expand_dims(np.atleast_2d(data[i:examples+i]), axis=0) for i in xrange(nb_samples)]
input_mat = np.concatenate(input_list, axis=0)
lb1,=plot(x,data,label="input")
lb2,=plot(x,P.dit,label="target")
legend([lb1,lb2])
title("training input and target data")
Out[14]:
<matplotlib.text.Text at 0x10c119b50>


This graph shows the training data (the noisy, fading signal) and the target data (I selected 'dits' in this example). This is just to verify that I have the right datasets selected. 


Figure 2.  Training and target data 

In the following sections I prepare the training target (the 'dits') in the proper format and set up the neural network model. I am using an LSTM with Dropout, and the model has 300 hidden neurons. I have also defined a callback function to capture the loss values during training so that I can plot the loss curve and follow the training progress.

In [15]:
# prepare target - the first column in merged dataframe
ydata = P.dit
target_list = [np.atleast_2d(ydata[i+examples:examples+i+y_examples]) for i in xrange(nb_samples)]
target_mat = np.concatenate(target_list, axis=0)

# set up a model
trials = input_mat.shape[0]
features = input_mat.shape[2]
hidden = 300

model = Sequential()
model.add(LSTM(input_dim=features, output_dim=hidden,return_sequences=False))
model.add(Dropout(.2))
model.add(Dense(input_dim=hidden, output_dim=y_examples))
model.add(Activation('linear'))
model.compile(loss='mse', optimizer='rmsprop')

# Call back to capture losses 
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
# Train the model
history = LossHistory()
model.fit(input_mat, target_mat, nb_epoch=100,callbacks=[history])

# Plot the loss curve 
plt.plot( history.losses)
title("training loss")

Here I have started the training. I selected 100 epochs, which means that the software will go through the training material 100 times. As you can see this goes very quickly; with a larger model or larger datasets the training might take minutes to hours per epoch. We have a very small model and a small dataset here.

Epoch 1/100
850/850 [==============================] - 0s - loss: 0.1050     
Epoch 2/100
850/850 [==============================] - 0s - loss: 0.0927     
Epoch 3/100
850/850 [==============================] - 0s - loss: 0.0870     
Epoch 4/100
850/850 [==============================] - 0s - loss: 0.0823     
Epoch 5/100
850/850 [==============================] - 0s - loss: 0.0788     
Epoch 6/100
850/850 [==============================] - 0s - loss: 0.0756     
Epoch 7/100
850/850 [==============================] - 0s - loss: 0.0724     
Epoch 8/100
850/850 [==============================] - 0s - loss: 0.0693     
Epoch 9/100
850/850 [==============================] - 0s - loss: 0.0668     
Epoch 10/100
850/850 [==============================] - 0s - loss: 0.0639     
Epoch 11/100
850/850 [==============================] - 0s - loss: 0.0611     
Epoch 12/100
850/850 [==============================] - 0s - loss: 0.0586     
Epoch 13/100
850/850 [==============================] - 0s - loss: 0.0561     
Epoch 14/100
850/850 [==============================] - 0s - loss: 0.0539     
Epoch 15/100
850/850 [==============================] - 0s - loss: 0.0519     
Epoch 16/100
850/850 [==============================] - 0s - loss: 0.0495     
Epoch 17/100
850/850 [==============================] - 0s - loss: 0.0476     
Epoch 18/100
850/850 [==============================] - 0s - loss: 0.0456     
Epoch 19/100
850/850 [==============================] - 0s - loss: 0.0441     
Epoch 20/100
850/850 [==============================] - 0s - loss: 0.0430     
Epoch 21/100
850/850 [==============================] - 0s - loss: 0.0411     
Epoch 22/100
850/850 [==============================] - 0s - loss: 0.0400     
Epoch 23/100
850/850 [==============================] - 0s - loss: 0.0387     
Epoch 24/100
850/850 [==============================] - 0s - loss: 0.0378     
Epoch 25/100
850/850 [==============================] - 0s - loss: 0.0370     
Epoch 26/100
850/850 [==============================] - 0s - loss: 0.0356     
Epoch 27/100
850/850 [==============================] - 0s - loss: 0.0350     
Epoch 28/100
850/850 [==============================] - 0s - loss: 0.0340     
Epoch 29/100
850/850 [==============================] - 0s - loss: 0.0334     
Epoch 30/100
850/850 [==============================] - 0s - loss: 0.0328     
Epoch 31/100
850/850 [==============================] - 0s - loss: 0.0322     
Epoch 32/100
850/850 [==============================] - 0s - loss: 0.0317     
Epoch 33/100
850/850 [==============================] - 0s - loss: 0.0309     
Epoch 34/100
850/850 [==============================] - 0s - loss: 0.0302     
Epoch 35/100
850/850 [==============================] - 0s - loss: 0.0299     
Epoch 36/100
850/850 [==============================] - 0s - loss: 0.0296     
Epoch 37/100
850/850 [==============================] - 0s - loss: 0.0290     
Epoch 38/100
850/850 [==============================] - 0s - loss: 0.0285     
Epoch 39/100
850/850 [==============================] - 0s - loss: 0.0283     
Epoch 40/100
850/850 [==============================] - 0s - loss: 0.0277     
Epoch 41/100
850/850 [==============================] - 0s - loss: 0.0272     
Epoch 42/100
850/850 [==============================] - 0s - loss: 0.0268     
Epoch 43/100
850/850 [==============================] - 0s - loss: 0.0265     
Epoch 44/100
850/850 [==============================] - 0s - loss: 0.0258     
Epoch 45/100
850/850 [==============================] - 0s - loss: 0.0256     
Epoch 46/100
850/850 [==============================] - 0s - loss: 0.0253     
Epoch 47/100
850/850 [==============================] - 0s - loss: 0.0251     
Epoch 48/100
850/850 [==============================] - 0s - loss: 0.0248     
Epoch 49/100
850/850 [==============================] - 0s - loss: 0.0246     
Epoch 50/100
850/850 [==============================] - 0s - loss: 0.0241     
Epoch 51/100
850/850 [==============================] - 0s - loss: 0.0236     
Epoch 52/100
850/850 [==============================] - 0s - loss: 0.0233     
Epoch 53/100
850/850 [==============================] - 0s - loss: 0.0234     
Epoch 54/100
850/850 [==============================] - 0s - loss: 0.0230     
Epoch 55/100
850/850 [==============================] - 0s - loss: 0.0229     
Epoch 56/100
850/850 [==============================] - 0s - loss: 0.0224     
Epoch 57/100
850/850 [==============================] - 0s - loss: 0.0223     
Epoch 58/100
850/850 [==============================] - 0s - loss: 0.0218     
Epoch 59/100
850/850 [==============================] - 0s - loss: 0.0218     
Epoch 60/100
850/850 [==============================] - 0s - loss: 0.0215     
Epoch 61/100
850/850 [==============================] - 0s - loss: 0.0215     
Epoch 62/100
850/850 [==============================] - 0s - loss: 0.0212     
Epoch 63/100
850/850 [==============================] - 0s - loss: 0.0208     
Epoch 64/100
850/850 [==============================] - 0s - loss: 0.0209     
Epoch 65/100
850/850 [==============================] - 0s - loss: 0.0207     
Epoch 66/100
850/850 [==============================] - 0s - loss: 0.0205     
Epoch 67/100
850/850 [==============================] - 0s - loss: 0.0203     
Epoch 68/100
850/850 [==============================] - 0s - loss: 0.0200     
Epoch 69/100
850/850 [==============================] - 0s - loss: 0.0200     
Epoch 70/100
850/850 [==============================] - 0s - loss: 0.0197     
Epoch 71/100
850/850 [==============================] - 0s - loss: 0.0197     
Epoch 72/100
850/850 [==============================] - 0s - loss: 0.0198     
Epoch 73/100
850/850 [==============================] - 0s - loss: 0.0193     
Epoch 74/100
850/850 [==============================] - 0s - loss: 0.0191     
Epoch 75/100
850/850 [==============================] - 0s - loss: 0.0189     
Epoch 76/100
850/850 [==============================] - 0s - loss: 0.0188     
Epoch 77/100
850/850 [==============================] - 0s - loss: 0.0189     
Epoch 78/100
850/850 [==============================] - 0s - loss: 0.0185     
Epoch 79/100
850/850 [==============================] - 0s - loss: 0.0185     
Epoch 80/100
850/850 [==============================] - 0s - loss: 0.0184     
Epoch 81/100
850/850 [==============================] - 0s - loss: 0.0183     
Epoch 82/100
850/850 [==============================] - 0s - loss: 0.0181     
Epoch 83/100
850/850 [==============================] - 0s - loss: 0.0180     
Epoch 84/100
850/850 [==============================] - 0s - loss: 0.0179     
Epoch 85/100
850/850 [==============================] - 0s - loss: 0.0177     
Epoch 86/100
850/850 [==============================] - 0s - loss: 0.0177     
Epoch 87/100
850/850 [==============================] - 0s - loss: 0.0174     
Epoch 88/100
850/850 [==============================] - 0s - loss: 0.0177     
Epoch 89/100
850/850 [==============================] - 0s - loss: 0.0175     
Epoch 90/100
850/850 [==============================] - 0s - loss: 0.0173     
Epoch 91/100
850/850 [==============================] - 0s - loss: 0.0172     
Epoch 92/100
850/850 [==============================] - 0s - loss: 0.0171     
Epoch 93/100
850/850 [==============================] - 0s - loss: 0.0171     
Epoch 94/100
850/850 [==============================] - 0s - loss: 0.0167     
Epoch 95/100
850/850 [==============================] - 0s - loss: 0.0167     
Epoch 96/100
850/850 [==============================] - 0s - loss: 0.0170     
Epoch 97/100
850/850 [==============================] - 0s - loss: 0.0164     
Epoch 98/100
850/850 [==============================] - 0s - loss: 0.0166     
Epoch 99/100
850/850 [==============================] - 0s - loss: 0.0163     
Epoch 100/100
850/850 [==============================] - 0s - loss: 0.0164     
Out[15]:
<matplotlib.text.Text at 0x11e055350>

The following graph shows the training loss during the training process. This gives you an idea of whether the training is progressing well or whether there is a problem with the model or the parameters.
Figure 3.  Training loss curve

In [16]:
# Use training data to check prediction
predicted = model.predict(input_mat)
In [17]:
# Plot original data (green) and predicted data (red)
lb1,=plot(data,'g',label="training")
#lb2,=plot(ydata,'b',label="target")
lb3,=plot(xrange(examples,examples+nb_samples), predicted[:,1],'r',label="predicted")
legend([lb1,lb3])
title("training vs. predicted")
Out[17]:
<matplotlib.text.Text at 0x11f164610>

In this section I check the model's prediction. Since I am using the training material, this should show a good result if the training was successful. As you can see from Figure 4 below, the predicted graph (red) lines up with the 'dits' in the training signal (green) despite the QSB fading and noise in the signal.
Figure 4.  Training vs. predicted graph

In the following section I create another Morse signal, this time with the text 'KCIUQ' but using the same noise, QSB and speed parameters. I will use this signal to validate how well the model has generalized the 'dit' concept.

In [18]:
# Let's change the input signal, instead of QUICK we have KCIUQ in Morse code 
P = signal('KCIUQ',WPM,Tq,sigma)
data = P.sig

# prepare input - 1 feature
input_list = [np.expand_dims(np.atleast_2d(data[i:examples+i]), axis=0) for i in xrange(nb_samples)]
input_mat = np.concatenate(input_list, axis=0)
plt.plot(x,data)
Out[18]:
[<matplotlib.lines.Line2D at 0x136050f90>]

Here is the generated validation Morse signal. It has the same letters as before but in reverse order. Can you read the letters 'KCIUQ' from the graph below?


Figure 5.  Validation Morse signal

In this section I use the above validation signal to create a prediction and plot the results.

In [19]:
predicted = model.predict(input_mat)
plt.plot(data,'g')
plt.plot(xrange(examples,examples+nb_samples), predicted[:,1],'r')
Out[19]:
[<matplotlib.lines.Line2D at 0x1217be9d0>]

As you can see from the graph below, the predicted 'dit' symbols (red) don't really line up with the actual 'dits' in the signal (green). This is not a surprise to me. To build a good model that can generalize, you need a lot of training material (typically millions of datapoints), and the model needs enough neural nodes to capture the details of the underlying signals.
In this simple experiment I had only 1950 datapoints and 300 hidden nodes. There are only 8 'dit' symbols in the training material; learning the CW skill well requires a lot more material and many repetitions, as anyone who has gone through the process can testify. The same principle applies to neural networks.
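
As a quick sanity check of that count, the Morsecode table defined in MorseEncoder.py above can be used to count the dits in the training text (a one-off check, not part of the notebook):

print(sum(Morsecode[c].count('.') for c in 'QUICK'))   # -> 8 dits in 'QUICK'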
Figure 6.  Validation test 


CONCLUSIONS 

In this experiment I built a proof of concept to test whether recurrent neural networks (especially the LSTM variant) can learn to detect symbols in noisy Morse code with deep QSB fading. This experiment may contain errors and misunderstandings on my part, as I have only had a few hours to play with the Keras neural network framework. The concept itself also needs more validation, as I may have used the framework incorrectly.

I think the results look quite promising. In only 100 epochs the RNN model learned the 'dits' in the noisy signal and was able to separate them from the 'dah' symbols. As the validation test shows, I overfitted the model to the small sample of training material used in the experiment. It will take much more training data and a larger, more complicated neural network to learn to generalize the symbols in Morse code. The training process will also need more computing capacity; it might be beneficial to have a graphics card with a GPU to speed up training going forward.

Any comments or feedback?

73
Mauri AG1LE


