The Quartz64 and its PCIe port
The Quartz64 is the latest SBC from Pine64. It’s based on the RK3566 SoC running at 2 GHz, comes with up to 8 GB of RAM (mine has 8 GB), and provides a PCIe x4 port!
This PCIe port is actually what I find amazing about this board. I know the RockPro64 also provides one, but I’ve never had the opportunity to get my hands on that board. The PCIe port allows you to connect almost any PCIe card supported by Linux, with the notable exception of GPUs, which are not supported (yet?).
Ever since I received my board, I’ve wanted to try something with the PCIe port. My eventual goal is to build a NAS using a SATA controller card plugged into that slot. Unfortunately, I don’t have any SATA card right now, but I do have this nice quad-port gigabit Intel NIC, so… let’s plug it in and see what happens!
Connect the NIC to the Quartz64
My Quartz64 is running the latest Manjaro image. As the video output does not work yet, I used the serial port to interact with the board.
So, I plugged in the card, booted the board, and checked the kernel log:
# cat kernel.txt | grep e1000
[ 6.733515] e1000e: Intel(R) PRO/1000 Network Driver
[ 6.733532] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[ 6.733798] e1000e 0000:03:00.0: enabling device (0000 -> 0002)
[ 6.734151] e1000e 0000:03:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[ 6.968585] e1000e 0000:03:00.0 eth1: (PCI Express:2.5GT/s:Width x4) 00:15:17:67:25:bd
[ 6.969078] e1000e 0000:03:00.0 eth1: Intel(R) PRO/1000 Network Connection
[ 6.969176] e1000e 0000:03:00.0 eth1: MAC: 0, PHY: 4, PBA No: D64202-003
[ 6.969488] e1000e 0000:03:00.1: enabling device (0000 -> 0002)
[ 6.969841] e1000e 0000:03:00.1: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[ 7.193174] e1000e 0000:03:00.1 eth2: (PCI Express:2.5GT/s:Width x4) 00:15:17:67:25:bc
[ 7.193199] e1000e 0000:03:00.1 eth2: Intel(R) PRO/1000 Network Connection
[ 7.193290] e1000e 0000:03:00.1 eth2: MAC: 0, PHY: 4, PBA No: D64202-003
[ 7.193606] e1000e 0000:04:00.0: enabling device (0000 -> 0002)
[ 7.193978] e1000e 0000:04:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[ 7.398738] e1000e 0000:04:00.0 eth3: (PCI Express:2.5GT/s:Width x4) 00:15:17:67:25:bf
[ 7.398758] e1000e 0000:04:00.0 eth3: Intel(R) PRO/1000 Network Connection
[ 7.398849] e1000e 0000:04:00.0 eth3: MAC: 0, PHY: 4, PBA No: D64202-003
[ 7.399133] e1000e 0000:04:00.1: enabling device (0000 -> 0002)
[ 7.399506] e1000e 0000:04:00.1: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[ 7.577558] e1000e 0000:04:00.1 eth4: (PCI Express:2.5GT/s:Width x4) 00:15:17:67:25:be
[ 7.577581] e1000e 0000:04:00.1 eth4: Intel(R) PRO/1000 Network Connection
[ 7.577702] e1000e 0000:04:00.1 eth4: MAC: 0, PHY: 4, PBA No: D64202-003
[ 7.606474] e1000e 0000:03:00.1 enp3s0f1: renamed from eth2
[ 7.653732] e1000e 0000:03:00.0 enp3s0f0: renamed from eth1
[ 7.714379] e1000e 0000:04:00.1 enp4s0f1: renamed from eth4
[ 7.760788] e1000e 0000:04:00.0 enp4s0f0: renamed from eth3
Nice, the board is detected!
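The log already reports the negotiated link (`PCI Express:2.5GT/s:Width x4`), but you can also check it from the PCIe side: `lspci -vv` prints a `LnkSta` line per device. Here is a sketch that extracts the width from such a line; the text is a hypothetical sample, since the real output depends on the board (on the Quartz64 you would run `lspci -vv -s 03:00.0`):

```shell
# Sample LnkSta line as lspci -vv would print it for a device like this NIC
# (hypothetical text shown here; on the board: lspci -vv -s 03:00.0)
lnksta="LnkSta: Speed 2.5GT/s, Width x4"
echo "$lnksta" | grep -o 'Width x[0-9]*'   # prints: Width x4
```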
Do we have any network interfaces?
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
link/tunnel6 :: brd :: permaddr dacc:5200:9836::
3: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 46:ff:d6:26:1f:eb brd ff:ff:ff:ff:ff:ff
4: enp3s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether 00:15:17:67:25:bd brd ff:ff:ff:ff:ff:ff
5: enp3s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether 00:15:17:67:25:bc brd ff:ff:ff:ff:ff:ff
6: enp4s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether 00:15:17:67:25:bf brd ff:ff:ff:ff:ff:ff
7: enp4s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether 00:15:17:67:25:be brd ff:ff:ff:ff:ff:ff
Yes we do!
I did a few more tests: the NIC works correctly and is able to saturate a gigabit link. Awesome!
Benchmark with 4 devices
Let’s try something: connect as many devices to the board as I can and run iperf
to see what performance it can reach out of the box.
My setup:
- 1 x86 desktop computer
- 1 x86 laptop
- 1 Pinebook Pro, connected to the network using the Pinebook Pro dock (USB 3)
- 1 PinePhone, connected to the network via a USB-C hub
I installed iperf3 on each device and then started the test.
First, I needed to configure IP addresses on all the interfaces. The board’s native interface is connected to my DHCP-served network; the four ports on the external NIC each needed a static address:
ip addr add 10.129.10.1/16 dev enp3s0f0
ip addr add 10.129.20.1/16 dev enp3s0f1
ip addr add 10.129.30.1/16 dev enp4s0f0
ip addr add 10.129.40.1/16 dev enp4s0f1
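The four assignments above follow a simple pattern (10.129.N0.1 on the Nth port), so they can be generated with a small loop. A sketch that just prints the commands, since `ip addr add` itself needs root and the real interfaces:

```shell
# Print the four address assignments (interface names taken from the kernel log)
i=1
for dev in enp3s0f0 enp3s0f1 enp4s0f0 enp4s0f1; do
  echo "ip addr add 10.129.$(( i * 10 )).1/16 dev $dev"
  i=$(( i + 1 ))
done
```

Note that all four /16 addresses overlap inside 10.129.0.0/16; distinct /24 subnets would avoid any routing ambiguity, though the overlap clearly didn’t stop the test here.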
Then, on the Quartz64, I launched 4 instances of iperf in server mode, one per port:
iperf -s -p 999[6..9]
And on each of the 4 clients, adjusting the address and port per link:
iperf -c 10.129.10.1 -p 9999
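Each client targets a different NIC port, so the full pairing of addresses and server ports looks like this; a sketch that prints one client command per link (the mapping of client N to 10.129.N0.1 and port 9995+N is my assumption, matching the shorthand above):

```shell
# Print the iperf command for each of the four clients:
# client N -> server address 10.129.N0.1, port 9996..9999
for i in 1 2 3 4; do
  echo "client $i: iperf -c 10.129.${i}0.1 -p 999$(( 5 + i ))"
done
```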
I also installed bwm-ng on the Quartz64 to monitor the bandwidth on all the interfaces:
As you can see, a total of 254426.70 KB/s was measured. That’s ~250 MB/s (0.25 GB/s), or about 2 Gb/s, which is quite nice. htop
shows that the CPU is probably the bottleneck here, as 1 core is pegged at 100%.
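The unit conversion is easy to check with shell arithmetic (using bwm-ng’s convention of 1 KB = 1024 bytes, and rounding the measured total down to whole KB):

```shell
# Convert the measured total from KB/s to bits/s
kb_per_s=254426
bits_per_s=$(( kb_per_s * 1024 * 8 ))
echo "$bits_per_s"   # prints 2084257792, i.e. ~2.08 Gb/s
```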
As a side note, I noticed that the Quartz64 was barely warm while the NIC was burning hot! I think that card is designed to be installed in a server chassis, with fans blowing air directly onto its heatsink.