# Streaming wideband data using Cyan

## 1. Introduction

The purpose of this application note is to describe the mechanism used to stream wideband data from Cyan Rx channels to a host machine. This application note assumes that you’ve purchased a complete host machine appropriate for your specific application. Per Vices recording solutions presently provide support for capturing instantaneous RF streams between 4 GHz and 24 GHz. For more information, including upcoming support for wider capture bandwidths, please contact us.

## 2. Requirements

The following instructions assume a host machine provisioned and configured with dual NICs (Napatech NT200A02-SCC SmartNICs), sufficient fast NVMe storage, and bulk storage.

These instructions refer to the following files:

• sdr2disk: A utility to perform a wideband capture of all packets on the interface and save them to disk.
• parse_pcap.sh: A utility to parse the pcap files and convert them to binary sample files.
• plot_vita_pcapng.py: An example Python visualization script that plots the first 5000 samples.

## 3. Setup

1. You must make sure the following command works:

$ uhd_find_devices

If this does not work, you will most likely need to follow the Network Configuration How-to.

2. Ensure that your NIC is configured for optimal performance, particularly the MTU size and network buffer sizes. See sections four and five of the Performance Tuning How-to Guide.

3. When plotting with matplotlib and our script, you will need python-pcapng installed (note: you may need to use pip3 instead of pip for Python 3). To install it, run the following:

$ sudo -i
$ pip install python-pcapng

4. The firewall on your network may also be an issue. To ensure that it does not interfere with the qSFP+ and MGMT ports, run the following with the name of the interface designated for qSFP+. To find the interfaces on your host system, run:

$ ip addr show
$ sudo firewall-cmd --zone=trusted --permanent --add-interface=enp73s0f0np0

IMPORTANT NOTE: After configuring the firewall, you will need to power cycle the unit for the changes to take effect.

5. Clone the examples repo from the Per Vices GitHub, which contains the Python plotting script:

$ git clone https://github.com/pervices/examples.git
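Once these steps are complete, a short script can confirm that python-pcapng is importable and that the capture interface is using a jumbo-frame MTU. This is a minimal sanity-check sketch, not a supported tool: the interface name enp73s0f0np0 is taken from the firewall example above, and the 9000 byte MTU is an assumed jumbo-frame value; use whatever settings the Performance Tuning How-to Guide specifies for your host.

```python
#!/usr/bin/env python3
# Sanity check for the setup steps above (assumptions: interface name
# "enp73s0f0np0" from the firewall example, and an assumed jumbo-frame
# MTU of 9000; adjust both to match your host).
import sys

IFACE = "enp73s0f0np0"   # qSFP+ interface; find yours with `ip addr show`
EXPECTED_MTU = 9000      # assumed jumbo-frame MTU, per the tuning guide

# python-pcapng must be importable for the plotting script to work.
try:
    import pcapng  # noqa: F401
    print("python-pcapng: OK")
except ImportError:
    sys.exit("python-pcapng missing: run 'pip install python-pcapng'")

# Read the interface MTU from sysfs (Linux only) and compare.
try:
    mtu = int(open(f"/sys/class/net/{IFACE}/mtu").read())
except FileNotFoundError:
    sys.exit(f"interface {IFACE} not found; check 'ip addr show'")
print(f"{IFACE} MTU = {mtu}" + ("" if mtu == EXPECTED_MTU else f", expected {EXPECTED_MTU}"))
```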

## 4. Instructions

Wideband stream capture is implemented in two parts: first, we capture the packet stream from the radio and record it to fast storage; then, we parse the pcapng files to extract the payload data for storage and display.
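To make the second stage concrete, the following is a minimal sketch of what a parse step can look like using python-pcapng: it pulls the UDP payloads out of a pcapng capture and appends them to a raw binary file. parse_pcap.sh is the supported tool for this; the sketch assumes plain Ethernet/IPv4/UDP framing without VLAN tags, and the file names capture.pcapng and samples.bin are hypothetical.

```python
#!/usr/bin/env python3
# Minimal sketch of the parse step: pull UDP payloads out of a pcapng
# capture and append them to a raw binary sample file. parse_pcap.sh is
# the supported tool; this only illustrates the idea. File names are
# hypothetical, and plain Ethernet/IPv4/UDP framing is assumed.
from pcapng import FileScanner
from pcapng.blocks import EnhancedPacket

ETH_HDR = 14      # Ethernet header length
IP_HDR = 20       # IPv4 header length (no options assumed)
UDP_HDR = 8       # UDP header length

with open("capture.pcapng", "rb") as fin, open("samples.bin", "wb") as fout:
    for block in FileScanner(fin):
        if not isinstance(block, EnhancedPacket):
            continue  # skip section header and interface blocks
        frame = block.packet_data
        # Keep only the UDP payload; the VITA 49 header inside the
        # payload is left in place here and handled by the plot script.
        fout.write(frame[ETH_HDR + IP_HDR + UDP_HDR:])
```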

Note

You must save all of your pcap captures in the /examples/sdr2disk folder, otherwise the script will not run.

### 4.1. Start streaming

Note

These instructions are written for the 3 Gigasample/second Rx boards. If you want to stream at 1 Gigasample/second, change the rate in the rx_start program flags. We will be streaming on channels A, B, C, and D.

To begin streaming, run rx_start from the UHD examples. On Ubuntu it is located in /lib/uhd/examples for the version of UHD installed on the system; if you are using a locally compiled version, the examples are found in host/build/examples. Use --help for an explanation of the arguments.

1. For baseband Rx streaming, with an RF input set at 310 MHz at -17 dB:

$ ./rx_start --dsp-freq 0 --lo-freq 0 --rate 3000000000 --channels 0,1,2,3 --gain 64

2. For midband Rx streaming, with an RF input set at 510 MHz at -3 dB (note: this RF input level is for Oracle; for Arch Linux, the input power was -16 dB to get similar results):

$ ./rx_start --dsp-freq 0 --lo-freq 500000000 --rate 3000000000 --channels 0,1,2,3 --gain 100

3. For highband Rx streaming, with an RF input set at 8.1 GHz at -4 dB:

$ ./rx_start --dsp-freq 0 --lo-freq 8000000000 --rate 3000000000 --channels 0,1,2,3 --gain 70

--dsp-freq adjusts the CORDIC shift, --lo-freq adjusts the LO shift, and --rate adjusts the sample rate. dsp-freq can be negative. If the command does not work correctly, run the stop command (see Section 4.4) before trying again.

### 4.2. Option 1: Capturing the Stream with Wireshark

Note

This will work on any operating system with Wireshark installed.

Open Wireshark while streaming data. In Wireshark, limit the capture to 270 kilobytes. Save the data in the same folder as the plot_vita_pcapng.py file, /examples/sdr2disk.

### 4.3. Option 2: Capturing the Stream with sdr2disk

The sdr2disk script starts n2disk threads to capture the information being streamed to the SFP ports. Each n2disk thread has its own RAID controller to save the streamed data to, which can be found at /storage0, /storage1, /storage2, and /storage3. The n2disk settings can be tweaked to allow for different file sizes and to set a capture limit.

To capture the data streaming off Cyan, run sdr2disk as super user (i.e. sudo ./sdr2disk), generally located in ~/scripts/sdr2disk, specifying the ports you want to capture from, the duration of the capture, and a filename for the capture:

Usage: ./sdr2disk -p [sfpA &| sfpB &| sfpC &| sfpD] -t [CAPTURE TIME IN SECONDS] -o [FILENAME]

Any combination of ports can be used, separated by commas. Please make sure you have enough storage for the specified capture time (see the sizing sketch at the end of this section). The filename will be followed by the date and time of the run.

Examples:

./sdr2disk.sh -p sfpA,sfpB,sfpC,sfpD -t 10 -o run1
./sdr2disk.sh -p sfpA,sfpB,sfpD -t 100 -o run2
./sdr2disk.sh -p sfpC,sfpD -t 500 -o run3
./sdr2disk.sh -p sfpA -t 1000 -o run4

Note 1: sfpA connects to ntxs0, sfpB connects to ntxs1, sfpC connects to ntxs2, and sfpD connects to ntxs3.

Note 2: The captures are currently stored in sfpA: /storage0/storage, sfpB: /storage1/storage, sfpC: /storage2/storage, and sfpD: /storage3/storage, under the most recent time-stamped directory. This can be modified in the sdr2disk script.
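Before starting a long capture, it is worth sanity checking that the target RAID volumes have room for it. A rough upper bound is the port line rate multiplied by the capture time; the sketch below assumes each SFP port is a fully loaded 40 Gbps link, so it overestimates captures taken at lower effective rates.

```python
#!/usr/bin/env python3
# Rough upper bound on the storage needed for an sdr2disk run.
# Assumes each SFP port is a fully loaded 40 Gbps link; captures at
# lower effective rates need less.
def capture_size_tb(ports: int, seconds: int, gbps: float = 40.0) -> float:
    """Upper-bound capture size in terabytes."""
    return gbps * 1e9 / 8 * ports * seconds / 1e12

# ./sdr2disk.sh -p sfpA,sfpB,sfpC,sfpD -t 10 -o run1 -> four ports, 10 s
print(f"{capture_size_tb(4, 10):.2f} TB")    # 0.20 TB
# ./sdr2disk.sh -p sfpA -t 1000 -o run4 -> one port, 1000 s
print(f"{capture_size_tb(1, 1000):.2f} TB")  # 5.00 TB
```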

### 4.4. Stop Streaming

To stop streaming, run rx_stop (located in /lib/uhd/examples, the same directory as rx_start), specifying the channels you want to stop. For example, to stop streaming on channels A, B, C, and D, you would use the command:

$ ./rx_stop --channels 0,1,2,3

or, for all channels:

$ ./rx_stop --channels 0,1,2,3,4,5,6,7

### 4.5. Plotting the Wireshark pcapng file

To plot the data captured in Wireshark, run the following:

$ python plot_vita_pcapng.py File_name.pcapng Destination_IP destination_port 12bits

Note

For 3 Gigasample/second Rx boards, you will need to ensure that the final argument in the above command is 12bits!

As an example, the following command would work:

$ python plot_vita_pcapng.py TMidBand-510MHz-rf.pcapng 10.10.10.10 42836 12bits

You will also be met with a series of prompts. You will need to answer them as follows (yes, no, yes):

Would you like to process all the packets ? y or n
y
Would you like to swap bytes of each samples ? y or n
n
Would you like to see the graph? y or n
y
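For reference, the heart of such a plot takes only a few lines of matplotlib. The sketch below is not the plot_vita_pcapng.py script itself: it assumes the samples have already been extracted to a raw binary file (the name samples.bin is hypothetical) and, purely for illustration, that they are 16-bit signed interleaved I/Q; the actual captures carry VITA 49 packets with 12-bit samples, which the supported script unpacks for you.

```python
#!/usr/bin/env python3
# Minimal plotting sketch, not the plot_vita_pcapng.py script itself.
# Assumes samples were already extracted to "samples.bin" (hypothetical
# name) as 16-bit signed interleaved I/Q; the real captures carry
# VITA 49 packets with 12-bit samples, which the supported script unpacks.
import numpy as np
import matplotlib.pyplot as plt

N = 5000  # plot the first 5000 samples, as the example script does
raw = np.fromfile("samples.bin", dtype=np.int16, count=2 * N)
iq = raw[0::2].astype(np.float32) + 1j * raw[1::2].astype(np.float32)

plt.plot(iq.real, label="I")
plt.plot(iq.imag, label="Q")
plt.xlabel("Sample index")
plt.ylabel("Amplitude (raw counts)")
plt.legend()
plt.title("First 5000 samples")
plt.show()
```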

## 5. Notes

When using this program in production, note that storing large amounts of data (captures longer than about 12 minutes) will cause very heavy NVMe drive usage. As throughput is a critical system requirement, the NVMe drives shipped with our host machines are heavily optimized for throughput rather than reliability.

This is because the underlying system architecture needs to balance the PCIe lanes available to each CPU against the largest possible storage capacity and throughput. Based on extensive benchmarking, we’ve observed that enterprise NVMe drives, though substantially more reliable and designed to sustain a high number of daily full disk writes, simply do not deliver the absolute performance necessary to sustain the required disk throughput when streaming.

As a consequence of this design requirement, we strongly urge customers to treat the included NVMe drives as consumable items. Consider that a 10 minute capture on two 40 Gbps channels effectively amounts to an entire full disk write. As a ballpark figure, consumer NVMe drives generally aim to support, on average, around 600 full disk writes (FTW) over their lifetime. However, as host machines generally contain between 16 and 32 such SSD drives, the statistical likelihood of any one of those NVMe drives failing is correspondingly higher. We therefore suggest that customers replace all NVMe drives after around 300 to 400 runs, and remain aware of possible data corruption arising from NVMe failure.
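The arithmetic behind the "one run is roughly a full disk write" figure can be worked through directly, as in the sketch below. The 600-write endurance figure comes from the discussion above; the 6 TB usable capacity per capture target is purely an assumption for illustration, so substitute the capacities of the drives actually shipped with your host.

```python
#!/usr/bin/env python3
# Back-of-envelope NVMe wear estimate for the capture described above.
# The 6 TB usable capacity per capture target is an illustrative
# assumption; the ~600 full-disk-write endurance comes from the text.
GBPS = 40                 # line rate per channel, gigabits per second
CHANNELS = 2              # channels captured simultaneously
SECONDS = 10 * 60         # a 10 minute capture

bytes_per_run = GBPS * 1e9 / 8 * CHANNELS * SECONDS
print(f"Data per run: {bytes_per_run / 1e12:.1f} TB")           # 6.0 TB

DISK_TB = 6.0             # assumed usable capacity of one capture target
ENDURANCE = 600           # full disk writes over the drive's lifetime
writes_per_run = bytes_per_run / 1e12 / DISK_TB
print(f"Full disk writes per run: {writes_per_run:.2f}")        # 1.00
print(f"Runs to endurance limit: {ENDURANCE / writes_per_run:.0f}")  # 600
```

With 16 to 32 drives in a host, the chance that at least one drive fails well before its individual limit grows quickly, which is why the replacement guidance above (300 to 400 runs) is deliberately more conservative than this single-drive figure.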