Streaming wideband data using Cyan

1. Introduction

The purpose of this application note is to describe the mechanism used to stream wideband data from Cyan Rx channels to a host machine. This application note assumes that you have purchased a complete host machine appropriate for your specific application. Per Vices recording solutions presently provide support for capturing instantaneous RF streams of between 4 and 24 GHz. For more information, including upcoming support for wider capture bandwidths, please contact us.

2. Requirements

The following instructions assume a host machine provisioned and configured with dual NICs (Napatech NT200A02-SCC SmartNICs), along with sufficiently fast NVMe storage and bulk storage.

They refer to the following files: rx_start and rx_stop (UHD examples), sdr2disk.sh, parse_pcap.sh, and plot_vita_pcapng.py.

3. Instructions

Wideband stream capture is implemented in two parts: first, we capture the packet stream from the radio and record it to fast storage; then, we parse the pcapng files to extract the payload data for storage and display.

a. Start streaming

To begin streaming, run rx_start from the UHD examples. On Ubuntu it will be located in /lib/uhd/examples for the version of UHD installed on the system; if you are using a locally compiled version, the examples will be found in host/build/examples. Use --help for an explanation of the arguments. Example command for streaming on channels a, c, h:

./rx_start --dsp-freq 100000000 --lo-freq 600000000 --rate 10000000 --channels 0,2,7
or for all channels:
./rx_start --dsp-freq 100000000 --lo-freq 600000000 --rate 10000000 --channels 0,1,2,3,4,5,6,7
--dsp-freq adjusts the CORDIC shift, --lo-freq adjusts the LO shift, and --rate adjusts the sample rate. --dsp-freq can be negative. If the command does not work correctly, run the stop-streaming command (section c below) before trying again.
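
The stop-then-start sequence can be wrapped in a small script. The following is a minimal sketch, assuming the stock install path and the channel and frequency values from the example above; adjust both to your setup.

#!/bin/bash
# Minimal sketch: stop any existing stream on these channels, then start
# a new one. Paths and argument values are assumptions; adjust as needed.
EXAMPLES=/lib/uhd/examples        # use host/build/examples for a local build
CHANNELS=0,2,7

# Stop any stream that may still be running on these channels.
"$EXAMPLES"/rx_stop --channels "$CHANNELS"

# Start the new stream and report a non-zero exit code from rx_start.
if ! "$EXAMPLES"/rx_start --dsp-freq 100000000 --lo-freq 600000000 \
        --rate 10000000 --channels "$CHANNELS"; then
    echo "rx_start failed; check the arguments and device state" >&2
fi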

b. Capture stream

The sdr2disk script starts n2disk threads to capture the information being streamed to the SFP ports. Each n2disk thread has its own RAID controller to save the streamed data to, mounted at /storage0, /storage1, /storage2 and /storage3. The n2disk settings can be tweaked to allow for different file sizes and to set a capture limit. To capture the data streaming off Cyan, run sdr2disk.sh as super user (i.e. sudo ./sdr2disk.sh); the script is generally located in ~/scripts/sdr2disk. Specify the ports you want to capture from, the duration of the capture, and a filename for the capture:

Usage: ./sdr2disk.sh -p [sfpA,sfpB,sfpC,sfpD] -t [CAPTURE TIME IN SECONDS] -o [FILENAME]

Any combination of ports can be used, separated by commas. Please make sure you have enough storage for the specified capture time (a rough storage estimate sketch follows the notes below). The filename will be suffixed with the date and time of the run.
Examples:
         ./sdr2disk.sh -p sfpA,sfpB,sfpC,sfpD -t 10 -o run1

         ./sdr2disk.sh -p sfpA,sfpB,sfpD -t 100 -o run2

         ./sdr2disk.sh -p sfpC,sfpD -t 500 -o run3

         ./sdr2disk.sh -p sfpA -t 1000 -o run4
Note 1: sfpA connects to ntxs0, sfpB connects to ntxs1, sfpC connects to ntxs2, sfpD connects to ntxs3.

Note 2: The captures are currently stored in sfpA:/storage0/storage, sfpB:/storage1/storage, sfpC:/storage2/storage, sfpD:/storage3/storage, under the most recent time stamped directory. This can be modified in the sdr2disk script.
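
Before a long capture, it can help to sanity check the available space. The following is a minimal sketch; the assumption of roughly 4 bytes per complex sample per channel plus about 10% packet overhead is an estimate rather than the exact on-disk framing, so treat the result as a ballpark figure.

#!/bin/bash
# Minimal sketch: rough free-space check before running sdr2disk.sh.
# Assumes ~4 bytes per complex sample per channel plus ~10% packet
# overhead (an estimate, not the exact on-disk framing).
RATE=10000000            # sample rate per channel, to match rx_start --rate
CHANNELS=2               # number of channels captured to this storage path
CAPTURE_SECONDS=10
STORAGE=/storage0

need_bytes=$(( RATE * 4 * CHANNELS * CAPTURE_SECONDS * 11 / 10 ))
free_bytes=$(( $(df --output=avail -B1 "$STORAGE" | tail -n 1) ))

if [ "$free_bytes" -lt "$need_bytes" ]; then
    echo "Not enough free space on $STORAGE: need ~$need_bytes bytes, have $free_bytes" >&2
else
    echo "OK: $STORAGE has $free_bytes bytes free (~$need_bytes bytes needed)"
fi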

c. Stop streaming

To stop streaming, run rx_stop (located in /lib/uhd/examples, alongside rx_start), specifying the channels you want to stop. For example, to stop streaming on channels a, c, h, you would use the command:

./rx_stop --channels 0,2,7
or for all channels:
./rx_stop --channels 0,1,2,3,4,5,6,7

d. Post processing

To process the packets and extract the samples from the pcap files, run parse_pcap.sh (also installed in ~/scripts/sdr2disk), specifying the complete location of the file, the destination address, the port number, the channel letter, and an output file header. The following examples show which destination addresses, ports, and channel letters correspond to each other:

Please provide:
        1. Pcap or Pcapng file ( provide the complete location, e.g. /storage0/storage/1629230682.356592/1629230684.105763.pcap )
        2. Destination Address ( Channels A & B: 10.10.10.10 , Channels C & D: 10.10.11.10, Channels E & F: 10.10.12.10, Channels G & H: 10.10.13.10 )
        3. Desired Port number ( Channel A: 42836, Channel B: 42837, Channel C: 42838, Channel D: 42839, Channel E: 42840, Channel F: 42841, Channel G: 42842, Channel H: 42843 )
        4. Channel letter ( A, B, C, D, E, F, G, H )
        5. File header
        Example: 
                bash parse_pcap.sh filename 10.10.10.10 42836 A test0

                bash parse_pcap.sh filename 10.10.10.10 42837 B test1

                bash parse_pcap.sh filename 10.10.11.10 42838 C test2

                bash parse_pcap.sh filename 10.10.11.10 42839 D test3

                bash parse_pcap.sh filename 10.10.12.10 42840 E test4

                bash parse_pcap.sh filename 10.10.12.10 42841 F test5

                bash parse_pcap.sh filename 10.10.13.10 42842 G test6

                bash parse_pcap.sh filename 10.10.13.10 42843 H test7
Note 1: If the script returns a message saying that no packets exist with the specified address and port, please double-check the pcap file in Wireshark to ensure you are trying to extract the right channels from the data capture.

Note 2: The binary values and an accompanying graph of the first 5000 samples will be found under ~/sdr2disk/bin_val_files.
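
To extract several channels from the same capture, the address and port mapping above can be scripted. The following is a minimal sketch; the pcap path is a placeholder and the output file headers are arbitrary.

#!/bin/bash
# Minimal sketch: run parse_pcap.sh for several channels using the
# address/port mapping documented above. PCAP is a placeholder path.
declare -A ADDR=(
    [A]=10.10.10.10 [B]=10.10.10.10 [C]=10.10.11.10 [D]=10.10.11.10
    [E]=10.10.12.10 [F]=10.10.12.10 [G]=10.10.13.10 [H]=10.10.13.10
)
declare -A PORT=(
    [A]=42836 [B]=42837 [C]=42838 [D]=42839
    [E]=42840 [F]=42841 [G]=42842 [H]=42843
)

PCAP=/storage0/storage/TIME/TIME.pcap   # replace TIME with your capture's timestamp

for ch in A B; do
    bash parse_pcap.sh "$PCAP" "${ADDR[$ch]}" "${PORT[$ch]}" "$ch" "run1_${ch}"
done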

4. Example Data Capture

In this example, we will describe how to capture a single 500MHz stream consisting of a 10MHz sine wave, and then display it in GNU Radio.

Summary

As a quick start, to capture 50MHz of complex data on channels 0, 1, and 2 at a center frequency of 701MHz, run the following command:

/lib/uhd/examples/rx_start --dsp-freq 25000000 --lo-freq 725000000 --rate 50000000 --gain 64 --channels 0,1,2

Capture Program

In this example, we are capturing from channels 0, 1, and 2, configuring the capture at 50MSPS, and saving it to fast storage. Channels 0, 1, and 2 correspond to sfpA and sfpB. Here we capture 10 seconds of data, but the capture time depends entirely on how much storage you have.

/home/jade/scripts/sdr2disk/sdr2disk.sh -p sfpA,sfpB -t 10 -o run11

After the stream has been successfully captured by the sdr2disk script, you can stop streaming using the following command:

/lib/uhd/examples/rx_stop --channels 0,1,2
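
For convenience, the three steps of this example can be combined into one sketch. The paths follow the example above and may differ on your system.

#!/bin/bash
# Minimal sketch: start streaming, capture 10 seconds to fast storage,
# then stop streaming. Paths follow the example above.
EXAMPLES=/lib/uhd/examples
SCRIPTS=/home/jade/scripts/sdr2disk

"$EXAMPLES"/rx_start --dsp-freq 25000000 --lo-freq 725000000 \
    --rate 50000000 --gain 64 --channels 0,1,2

sudo "$SCRIPTS"/sdr2disk.sh -p sfpA,sfpB -t 10 -o run11

"$EXAMPLES"/rx_stop --channels 0,1,2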

Extracting Program

To support the required throughput, we save the payload data as pcapng files using the above script, and use a separate script, parse_pcap.sh, to extract the samples after capture. This program parses the pcap files into complex binary files. The pcap files can be found in /storage0/storage for sfpA, /storage1/storage for sfpB, /storage2/storage for sfpC, and /storage3/storage for sfpD. The commands below show how to extract each channel's data from its corresponding pcap file; if streaming from channels A and B, the script must be run twice on the same pcap file, using each channel's port, to extract both channels.

cd /home/jade/scripts/sdr2disk/
bash parse_pcap.sh /storage0/storage/TIME/TIME.pcap 10.10.10.10 42836 A test0A
bash parse_pcap.sh /storage0/storage/TIME/TIME.pcap 10.10.10.10 42837 B test0B
bash parse_pcap.sh /storage1/storage/TIME/TIME.pcap 10.10.11.10 42838 C test0C
bash parse_pcap.sh /storage1/storage/TIME/TIME.pcap 10.10.11.10 42839 D test0D
bash parse_pcap.sh /storage2/storage/TIME/TIME.pcap 10.10.12.10 42840 E test0E
bash parse_pcap.sh /storage2/storage/TIME/TIME.pcap 10.10.12.10 42841 F test0F
bash parse_pcap.sh /storage3/storage/TIME/TIME.pcap 10.10.13.10 42842 G test0G
bash parse_pcap.sh /storage3/storage/TIME/TIME.pcap 10.10.13.10 42843 H test0H

Note: you will have to modify these commands, replacing TIME with the timestamp corresponding to your capture.
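
As a small convenience, the most recent capture can also be located automatically instead of typing the timestamp by hand. This sketch assumes the default directory layout described in Note 2 of section 3b.

#!/bin/bash
# Minimal sketch: find the newest timestamped capture on /storage0 and
# extract channel A from its newest pcap file.
latest_dir=$(ls -1d /storage0/storage/*/ | sort | tail -n 1)
latest_pcap=$(ls -1 "$latest_dir"*.pcap* | sort | tail -n 1)

cd /home/jade/scripts/sdr2disk/
bash parse_pcap.sh "$latest_pcap" 10.10.10.10 42836 A test0A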

Data Visualization

The parse_pcap.sh script calls the plot_vita_pcapng.py Python script, which extracts the binary values and plots the first 5000 samples.

5. Common Errors

If you run into pf_ring errors, please update the kernel module using the following commands, or refer to the instructions at https://www.ntop.org/guides/pf_ring/get_started/git_installation.html.

To manually update pf_ring, run the following:

git clone https://github.com/ntop/PF_RING.git
cd PF_RING/kernel
sudo make
sudo make install
In the event of version mismatches, you may also be required to update ntopng, following the instructions available at: https://packages.ntop.org/apt-stable/

Alternatively, on Ubuntu 20.04, you can run the following:

wget https://packages.ntop.org/apt-stable/20.04/all/apt-ntop-stable.deb
sudo apt install ./apt-ntop-stable.deb
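
After installation, you can confirm that the kernel module is actually loaded. These are standard checks rather than Per Vices specific tooling; pf_ring exposes a status file under /proc once the module is loaded.

# Confirm the pf_ring kernel module is loaded.
lsmod | grep pf_ring
# When loaded, pf_ring also reports its version and ring information here:
cat /proc/net/pf_ring/info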

If you run into NetRxOpen errors about merging traffic on the interface, make sure netctl is stopped for the Napatech interfaces using the following commands:

sudo netctl stop ntxs0
sudo netctl stop ntxs1
sudo netctl stop ntxs2
sudo netctl stop ntxs3
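
Equivalently, the four interfaces can be stopped in a loop, and netctl list can then be used to confirm that none of the Napatech profiles remain active (active profiles are marked with an asterisk).

for i in 0 1 2 3; do
    sudo netctl stop "ntxs$i"
done
netctl list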

If you run into X11 connection errors, please add the following line to your environment or run the command in the terminal you are using:

 export XAUTHORITY=$HOME/.Xauthority

6. Notes

When using this program in production, note that storing large amounts of data (captures longer than roughly 12 minutes) will cause very heavy NVMe drive usage. As throughput is a critical system requirement, the NVMe drives shipped with our host machines are heavily optimized for throughput, as opposed to reliability.

This is because the underlying system architecture needs to balance the available PCIe lanes connected to each CPU against the largest possible storage capacity and throughput. Based on extensive benchmarking, we have observed that enterprise NVMe drives, though providing substantially greater reliability and designed to sustain a high number of daily full disk writes, simply do not offer the absolute performance necessary to sustain the required disk throughput when streaming.

As a consequence of this design requirement, we strongly urge customers to treat the included NVMe drives as consumable items. Consider that a 10 minute capture on two 40Gbps channels effectively consists of an entire full disk write. As a ballpark figure, consumer NVMe drives generally aim to support, on average, around 600 full disk writes over their lifetime. However, as host machines generally require between 16 and 32 such drives, the statistical likelihood of any one of those NVMe drives failing is correspondingly higher. Thus, we suggest that customers replace all NVMe drives after around 300-400 runs, and be aware of possible data corruption arising from NVMe failure.
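
As a rough worked example of the write volume involved (assuming the full line rate is sustained for the whole capture): two 40 Gbps streams total 80 Gbps, or about 10 GB/s, so a 10 minute (600 s) capture writes roughly 6 TB. That is on the order of one complete write of a capture array with single-digit terabytes of capacity (an assumption about the array size, not a specification).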

If you have any additional questions about this topic, please feel free to reach out to us.

7. Fast Invocation