Sunday, July 4, 2021

Cloudberry Live - listen to your rig from anywhere with a Web UI using a Raspberry Pi

Overview 

I wanted a fully open source solution for listening to my radio(s) from my mobile phone and laptop over the web, using a low cost Raspberry Pi as the rig control server.  While there are many different remote station solutions out there, I could not find one that would just work with a normal web browser (Chrome, Safari, etc.) and without complicated network configuration, such as exposing your internal WiFi network via a router.  I also wanted a solution that is really easy to install on a Raspberry Pi and to update as new features get added to the software.

I revisited the KX3 remote control project I did in February 2016 and started the new Cloudberry Live project. Cloudberry Live brings several improvements, such as no need to install a Mumble client on your devices - you can listen to your radio(s) using a regular web browser.  I also upgraded my Amazon Alexa skill to stream audio to Amazon Echo devices and to control the frequency using voice commands.

Here is a short demo video showing how Cloudberry.live works:

Features 

  • Listen to your radio via web streaming from anywhere.
  • Web UI that works with mobile, tablet and laptop browsers (Chrome and Safari tested).
  • View the top 10 DX cluster spots and switch the radio to a spotted frequency with one click.
The software is currently at the alpha stage - all the parts work as shown in the demo above, but the code needs refactoring and general clean-up.  The cloudberry.live proxy service currently relies on jprq, a third-party open source proxy provider.  My plan is to host a reverse proxy myself in order to simplify the installation process.

The software is written using the Python Flask framework and bash scripts. Deployment to the Raspberry Pi is done with an Ansible playbook that configures the environment correctly.  I am using the NGINX web server to serve the web application.

The audio streaming portion uses the HTTP Live Streaming (HLS) protocol: ffmpeg captures audio from an ALSA port and encodes it in AAC format. A Python http.server on port 8000 serves the HLS traffic.  I have verified that Safari and Chrome browsers can stream the HLS audio; Chrome requires the Play HLS M3u8 extension to be installed.
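As an illustration, the capture-and-segment pipeline can be as small as the two commands below (the ALSA device name hw:1,0 and the 64 kbit/s bitrate are assumptions for this sketch, not necessarily my exact settings):

ffmpeg -f alsa -i hw:1,0 -ac 1 -c:a aac -b:a 64k -f hls -hls_time 2 -hls_list_size 6 -hls_flags delete_segments stream.m3u8

python3 -m http.server 8000

The first command writes a rolling stream.m3u8 playlist plus short AAC segments; the second serves them over HTTP so a browser can open the playlist URL.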

The home screen is shown below.  It gives you the top 10 spots and a link to open the audio streaming window.  When you click a frequency link in the freq column, the server sends hamlib commands to the radio to set the frequency and mode.  Only USB and LSB modes are supported in the current software version.



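As a minimal sketch of how such a frequency click could be handled on the server side (illustrative only, not the actual Cloudberry Live code; the Hamlib model number 3073 for the IC-7300 and the /dev/ttyUSB0 serial device are assumptions):

import subprocess
from flask import Flask, jsonify

app = Flask(__name__)

# Base rigctl invocation; model 3073 is Hamlib's IC-7300 entry.
RIGCTL = ["rigctl", "-m", "3073", "-r", "/dev/ttyUSB0"]

@app.route("/qsy/<int:freq_hz>")
def qsy(freq_hz):
    # Conventional sideband choice: LSB below 10 MHz, USB above.
    mode = "LSB" if freq_hz < 10_000_000 else "USB"
    subprocess.run(RIGCTL + ["F", str(freq_hz)], check=True)  # set frequency in Hz
    subprocess.run(RIGCTL + ["M", mode, "2400"], check=True)  # set mode and passband
    return jsonify(freq=freq_hz, mode=mode)
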
The Tune screen is shown below.  It is still work-in-progress and needs some polishing.  Select Frequency allows you to enter the frequency numerically. The VFO range bar changes the radio frequency as you drag the green selection bar.   The band selection buttons don't do anything at the moment.



The Configure Rig screen allows you to select your rig from the list of hamlib supported radios. I am using an ICOM IC-7300, which is currently the default setting.


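Hamlib's rigctl tool can print the same list of supported radios that this screen is built from:

rigctl -l

Each line shows a model number and rig name; for example, rigctl -l | grep -i 7300 finds the IC-7300 entry.
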
The Search button on the menu bar allows you to check a call sign against the hamdb.org database. A pop-up window shows the station details:




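Under the hood this is a simple HTTP lookup. A minimal sketch in Python, assuming hamdb.org's callsign/format/appname URL pattern (the application name and the response layout below are assumptions for illustration):

import requests

resp = requests.get("http://api.hamdb.org/AG1LE/json/cloudberry-demo")
station = resp.json()["hamdb"]["callsign"]  # assumed layout: station details as a dict
print(station)
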
Amazon Alexa Skill

I created a new Alexa skill, Cloudberry Live (not published yet), that uses the web API for selecting the frequency based on DX cluster spots and HLS streaming for listening to your radio.  While the skill currently works only with my station, my goal is to implement some sort of registration process so that Alexa users would have more choice for listening to ham radio traffic from DX stations around the world using Cloudberry.live software.
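Starting playback from a skill amounts to returning a standard AudioPlayer.Play directive. A sketch of the kind of response the skill backend can return (the stream URL is a placeholder, not my actual endpoint):

def play_response(stream_url):
    # Standard Alexa response carrying an AudioPlayer.Play directive.
    return {
        "version": "1.0",
        "response": {
            "shouldEndSession": True,
            "directives": [{
                "type": "AudioPlayer.Play",
                "playBehavior": "REPLACE_ALL",
                "audioItem": {
                    "stream": {
                        "url": stream_url,          # must be HTTPS for real devices
                        "token": "cloudberry-live",
                        "offsetInMilliseconds": 0,
                    }
                }
            }],
        },
    }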

This would also give people with disabilities an opportunity to enjoy listening to the HF bands using voice controlled, low cost ($20 - $35) smart speakers.  By keeping your radio (Raspberry Pi server) online you could help grow the ham community.

Installation

I have posted the software to Github in a private repo.  The software will have the following key features:
  • One step software installation to Raspberry Pi using Ansible playbooks (see the sample command after this list).
  • Configure your radio using Hamlib.
  • Get your personalized Cloudberry.live weblink.
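With the repo checked out, the install is intended to be a single command along these lines (the inventory and playbook file names are placeholders, not the final ones):

ansible-playbook -i inventory/hosts.yml cloudberry.yml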
I have been developing cloudberry.live on my MacBook Pro and pushing new versions to the Raspberry Pi server downstairs where my IC-7300 is located. A typical Ansible playbook update takes about 32 seconds (this includes restarting the services).  I can watch the access and error logs on the server over SSH, which makes debugging quite easy.
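For example, with NGINX's default log locations (the hostname is a placeholder):

ssh pi@raspberrypi.local tail -f /var/log/nginx/access.log /var/log/nginx/error.log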

Questions? 

I am looking for collaborators to work with me on this project.  If you are interested in open source web development using the Python Flask framework, let me know by posting a comment below.


73  de 
Mauri AG1LE


Wednesday, March 31, 2021

Exciting new digital mode CT8 for ham radio communications

April 1, 2021 

Overview 

CT8 is an exciting new digital mode designed for interactive ham radio communication where signals may be weak and fading, and openings may be short.

A beta release of CT8 offers sensitivity down to –48 dB on the AWGN channel, and DX contacts over four times the distance possible with FT8. An auto-sequencing feature offers the option to respond automatically to the first decoded reply to your CQ.

The best part of this new mode is that it is easy to learn to decode in your head, so no decoder software is needed. Alpha users of the CT8 mode report that learning to decode CT8 is ten times easier than Morse code.  For those who would rather use a computer, an open source Tensorflow based machine learning decoder is included in this beta release.

CT8 is based on a novel avian vocalization encoding scheme.  The character combinations were designed to be very easily recognizable and to leverage existing QSO practices from communication modes like CW.

Below is an example audio clip of how to establish a CT8 contact - the message format should be familiar to anybody who has listened to Morse code on the ham radio bands before.

Listen to the "CQ CQ DE AG1LE K"  - the audio has rich syllabic tonal and harmonic features that are very easy to recognize even under noisy band conditions. 

Fig 1. below shows the corresponding spectrogram. Notice the harmonic spectral features that ensure accurate symbol decoding and provide high sensitivity and tolerance against rapid fading, flutter and QRM.

Fig 1. CT8 spectrogram - CQ CQ CQ DE AG1LE K

The audio clip sample may sound a bit like a chicken.  This is actually a key feature of avian vocalization encoding.   

Scientific Background 

The idea behind the CT8 mode is not new.  A lot of research has been done on avian vocalizations over the past hundred years. Since the late 1990s, digital signal processing software has become widely available, and vocal signals can be analyzed using sonograms and spectrograms on a personal computer.

In research article [1], Dr. Nicholas Collias described sound spectrograms of 21 of the 26 vocal signals in the extensive vocal repertoire of the African Village Weaver (Ploceus cucullatus). A spectrographic key to vocal signals helps make these signals comparable across investigators. Short-distance contact calls are given in favorable situations and are generally characterized by low amplitude and great brevity of notes. Alarm cries are longer, louder, and often strident calls with much energy at high frequencies, whereas threat notes, also relatively long and harsh, emphasize lower frequencies.

In a very interesting research article [2], Kevin G. McCracken and Frederick H. Sheldon conclude that the characters most subject to ecological convergence, and thus of least phylogenetic value, are first peak-energy frequency and frequency range, because sound penetration through vegetation depends largely on frequency. The most phylogenetically informative characters are number of syllables, syllable structure, and fundamental frequency, because these are more reflective of behavior and syringeal structure. The figure below gives details about heron phylogeny, corresponding spectrograms, vocal characters, and habitat distributions.




Habitat distributions suggest that avian species inhabiting open areas such as savannas, grasslands, and open marshes have higher peak-energy frequencies (kHz) and broader frequency ranges (kHz) than taxa inhabiting closed habitats such as forests. The number of syllables is the number most frequently produced.

Ibises, tiger-herons, and boat-billed herons emit a rapid series of similar syllables; other heron vocalizations generally consist of singlets, doublets, or triplets. Syllabic structure may be tonal (i.e., pure whistled notes) or harmonic (i.e., possessing overtones; integral multiples of the base frequency). Fundamental frequency (kHz) is the base frequency of a syllable and is a function of syringeal morphology. 

These vocalization features can be used to train modern machine learning algorithms. In fact, in a series of studies [3] published between 2014 and 2016, Georgia Tech research engineer Wayne Daley and his colleagues exposed groups of six to 12 broiler chickens to moderately stressful situations - such as high temperatures, increased ammonia levels in the air and mild viral infections - and recorded their vocalizations with standard USB microphones. They then fed the audio into a machine learning program, training it to recognize the difference between the sounds of contented and distressed birds.   According to the Scientific American article [4], Carolynn “K-lynn” Smith, a biologist at Macquarie University in Australia and a leading expert on chicken vocalizations, says that although the studies published so far are small and preliminary, they are “a neat proof of concept” and “a really fascinating approach.”

What does CT8 stand for? 

Building on this solid scientific foundation, it is easy to imagine very effective communication protocols based on millions of years of evolution of various avian species. After all, birds are social animals and have very expressive and effective communication protocols, whether to warn others about an approaching predator or to invite flock members to join a feast on a corn field.

Humans have domesticated several avian species and have been living with species like the chicken (Gallus gallus domesticus) for over 8000 years.  CT8 therefore sounds inherently natural to humans, and based on extensive alpha testing performed by the development team it is much easier to learn to decode than Morse code.


CT8 stands for "Chicken Talk" version 8 -- after over a year of development effort, seven previous encoding versions tested under difficult band conditions, and hundreds of trained machine learning models, the software development team has finally been able to release the CT8 digital mode.

Encoding Scheme 

From a ham radio perspective, the frequency range of these avian vocalizations is in most cases below 4 kHz.  This makes it possible to use existing SSB or FM transceivers without any modifications, other than perhaps adjusting the filter bandwidth available in modern rigs.  The audio sampling rate used in this project was 8 kHz, so the original audio source files were re-sampled using a Linux command line tool:

sox input.wav -b 16 -c 1 output.wav rate 8000

The encoding scheme for the CT8 mode was created by collecting various free audio sources of chicken sounds and carefully assembling vowels, plosives, fricatives and nasals, using this resource as the model. The free open source cross-platform audio software Audacity was used to extract vocalizations in the spectrogram view and to create labeled audio files.
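Audacity's Export Labels feature writes a plain text file with one tab-separated line per label (start time, end time, label text). A minimal Python reader for such a file (labels.txt is a placeholder name):

def read_labels(path):
    # Each line: <start seconds>\t<end seconds>\t<label text>
    with open(path) as f:
        for line in f:
            start, end, label = line.rstrip("\n").split("\t")
            yield float(start), float(end), label

for start, end, label in read_labels("labels.txt"):
    print(f"{label}: {start:.3f} - {end:.3f} s")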

Figure 3. below shows a sample audio file with assigned character labels. 

Fig 3. Labeled vocalizations using Audacity software

CT8 Software

The encoder software is written in C++ and Python and runs on Windows, OSX, and Linux.  The sample decoder will be made available on Github as open source software if there is enough interest in this novel communication mode from the ham radio community.

For the CT8 decoder, a machine learning based decoder was built on top of the open source Tensorflow framework.  The decoder was trained on short 4 second audio clips; in the experiments a character error rate of 0.1% and a word accuracy of 99.5% were achieved.  With more real-world training material the ML model is expected to achieve even better decoding accuracy.
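As a rough illustration of the shape of such a model (a generic sketch of a per-symbol spectrogram classifier, not the actual CT8 decoder; the frame sizes follow the 8 kHz / 4 second clips mentioned above):

import tensorflow as tf

def to_spectrogram(waveform):
    # 4 s at 8 kHz = 32000 samples -> 249 frames x 129 frequency bins.
    stft = tf.signal.stft(waveform, frame_length=256, frame_step=128)
    return tf.abs(stft)[..., tf.newaxis]

def make_model(num_symbols):
    # Small CNN classifying one spectrogram into one of num_symbols characters.
    inputs = tf.keras.Input(shape=(249, 129, 1))
    x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_symbols, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)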

Future Enhancements

CT8 opens a new era for ham radio communication protocol development using biomimetic principles.  Adding new phonemes based on the principles of ecological signals described in article [2] could open up things like a "DX mode" for long distance communication.  For example, the vocalizations of cetaceans (whales) could be used to build a new phoneme map for DX contacts - some of the lowest frequency whale sounds can travel through the ocean as far as 10,000 miles without losing their energy.


73  de AG1LE 


PS. If you made it down here, I hope that you enjoyed this figment of my imagination and I wish you a very happy April 1st.


References

[1] Nicholas E. Collias, Vocal Signals of the Village Weaver: A Spectrographic Key and the Communication Code

[2] Kevin G. McCracken and Frederick H. Sheldon, Avian vocalizations and phylogenetic signal

[3] Wayne Daley et al., Identifying rale sounds in chickens using audio signals for early disease detection in poultry

[4] Ferris Jabr, Fowl Language: AI Decodes the Nuances of Chicken “Speech”, Scientific American

