In the previous article, I detailed all the steps I took to get started with the QuartzPro64, and I ended up with a Linux system running on kernel 6.1 (this commit from neggles’ linux-quartz64 branch).
In this article, I would like to go into more detail about the current status of the software in (nearly) mainline Linux. Note that things related to the RK3588 and the QuartzPro64 are moving fast, so the content of this post will probably become obsolete quite soon!
TL;DR: both integrated NICs and PCIe work, USB and HDMI in/outs do not.
Heatsink
Since I plan on running benchmarks and build kernels directly on the board, and since the board provides mounting holes, I decided to install a heatsink on it. I used this black “northbridge” heatsink together with a piece of thermal pad I bought for another project. I’m not quite happy with the result, but I guess it’ll do the job.
CPU
Here is the output of lscpu:
# lscpu
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: ARM
Model name: Cortex-A55
Model: 0
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: r2p0
BogoMIPS: 48.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp
asimdhp cpuid asimdrdm lrcpc dcpop asimddp
Model name: Cortex-A76
Model: 0
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: r4p0
BogoMIPS: 48.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp
asimdhp cpuid asimdrdm lrcpc dcpop asimddp
Caches (sum of all):
L1d: 384 KiB (8 instances)
L1i: 384 KiB (8 instances)
L2: 2.5 MiB (8 instances)
L3: 3 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerabilities:
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; __user pointer sanitization
Spectre v2: Mitigation; CSV2, BHB
Srbds: Not affected
Tsx async abort: Not affected
Looks good! Note that neggles’ kernel probably runs the CPU at 1.2 GHz since it does not provide the drivers needed to enable cpufreq. According to the wiki, SRE’s branch integrates them, but I haven’t tried it (yet?).
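If you want to check the cpufreq status on your own build, it is visible in sysfs: on a kernel without RK3588 cpufreq support the scaling files are simply absent, and when cpufreq is active they report the current frequency of each core in kHz. A minimal check, assuming the standard sysfs layout:
# ls /sys/devices/system/cpu/cpu0/cpufreq/ 2>/dev/null || echo "no cpufreq on cpu0"
# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq 2>/dev/null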
PCIe
PCIe works out of the box, thanks to the rockchip-dw-pcie driver.
# dmesg | grep -i pci
[ 0.012545] PCI/MSI: /interrupt-controller@fe600000/msi-controller@fe640000 domain created
[ 0.013309] PCI/MSI: /interrupt-controller@fe600000/msi-controller@fe660000 domain created
[ 0.104171] reg-fixed-voltage vcc3v3-pcie30-regulator: nonexclusive access to GPIO for vcc3v3-pcie30-regulator
[ 0.137900] PCI: CLS 0 bytes, default 64
[ 0.192452] rockchip-dw-pcie a40000000.pcie: host bridge /pcie@fe150000 ranges:
[ 0.193114] rockchip-dw-pcie a40000000.pcie: IO 0x00f0100000..0x00f01fffff -> 0x00f0100000
[ 0.193895] rockchip-dw-pcie a40000000.pcie: MEM 0x00f0200000..0x00f0ffffff -> 0x00f0200000
[ 0.194668] rockchip-dw-pcie a40000000.pcie: MEM 0x0900000000..0x093fffffff -> 0x0900000000
[ 0.203457] rockchip-dw-pcie a40000000.pcie: iATU unroll: enabled
[ 0.203998] rockchip-dw-pcie a40000000.pcie: iATU regions: 8 ob, 8 ib, align 64K, limit 8G
[ 1.311784] rockchip-dw-pcie a40000000.pcie: Phy link never came up
[ 1.312375] rockchip-dw-pcie a40000000.pcie: PCI host bridge to bus 0000:00
[ 1.312993] pci_bus 0000:00: root bus resource [bus 00-0f]
[ 1.313480] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff] (bus address [0xf0100000-0xf01fffff])
[ 1.314323] pci_bus 0000:00: root bus resource [mem 0xf0200000-0xf0ffffff]
[ 1.314935] pci_bus 0000:00: root bus resource [mem 0x900000000-0x93fffffff]
[ 1.315566] pci 0000:00:00.0: [1d87:3588] type 01 class 0x060400
[ 1.316113] pci 0000:00:00.0: reg 0x10: [mem 0x00000000-0x3fffffff]
[ 1.316674] pci 0000:00:00.0: reg 0x14: [mem 0x00000000-0x3fffffff]
[ 1.317235] pci 0000:00:00.0: reg 0x38: [mem 0x00000000-0x0000ffff pref]
[ 1.317873] pci 0000:00:00.0: supports D1 D2
[ 1.318249] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[ 1.320377] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-0f] (conflicts with (null) [bus 00-0f])
[ 1.321396] pci 0000:00:00.0: BAR 0: no space for [mem size 0x40000000]
[ 1.321983] pci 0000:00:00.0: BAR 0: failed to assign [mem size 0x40000000]
[ 1.322601] pci 0000:00:00.0: BAR 1: no space for [mem size 0x40000000]
[ 1.323186] pci 0000:00:00.0: BAR 1: failed to assign [mem size 0x40000000]
[ 1.323813] pci 0000:00:00.0: BAR 6: assigned [mem 0xf0200000-0xf020ffff pref]
[ 1.324450] pci 0000:00:00.0: PCI bridge to [bus 01-ff]
[ 1.325752] pcieport 0000:00:00.0: PME: Signaling with IRQ 40
[ 1.326377] pcieport 0000:00:00.0: AER: enabled with IRQ 41
[ 1.440658] rockchip-dw-pcie fe180000.pcie: host bridge /pcie@fe180000 ranges:
[ 1.441331] rockchip-dw-pcie fe180000.pcie: IO 0x00f3100000..0x00f31fffff -> 0x00f3100000
[ 1.442101] rockchip-dw-pcie fe180000.pcie: MEM 0x00f3200000..0x00f3ffffff -> 0x00f3200000
[ 1.442864] rockchip-dw-pcie fe180000.pcie: MEM 0x09c0000000..0x09ffffffff -> 0x09c0000000
[ 1.443727] rockchip-dw-pcie fe180000.pcie: iATU unroll: enabled
[ 1.444281] rockchip-dw-pcie fe180000.pcie: iATU regions: 8 ob, 8 ib, align 64K, limit 8G
[ 1.651821] rockchip-dw-pcie fe180000.pcie: PCIe Gen.1 x1 link up
[ 1.652449] rockchip-dw-pcie fe180000.pcie: PCI host bridge to bus 0003:30
[ 1.653053] pci_bus 0003:30: root bus resource [bus 30-3f]
[ 1.653536] pci_bus 0003:30: root bus resource [io 0x100000-0x1fffff] (bus address [0xf3100000-0xf31fffff])
[ 1.654394] pci_bus 0003:30: root bus resource [mem 0xf3200000-0xf3ffffff]
[ 1.654995] pci_bus 0003:30: root bus resource [mem 0x9c0000000-0x9ffffffff]
[ 1.655633] pci 0003:30:00.0: [1d87:3588] type 01 class 0x060400
[ 1.656194] pci 0003:30:00.0: reg 0x38: [mem 0x00000000-0x0000ffff pref]
[ 1.656832] pci 0003:30:00.0: supports D1 D2
[ 1.657208] pci 0003:30:00.0: PME# supported from D0 D1 D3hot
[ 1.660998] pci 0003:30:00.0: Primary bus is hard wired to 0
[ 1.661495] pci 0003:30:00.0: bridge configuration invalid ([bus 01-ff]), reconfiguring
[ 1.662337] pci 0003:31:00.0: [10ec:8168] type 00 class 0x020000
[ 1.662930] pci 0003:31:00.0: reg 0x10: [io 0x0000-0x00ff]
[ 1.663492] pci 0003:31:00.0: reg 0x18: [mem 0x00000000-0x00000fff 64bit]
[ 1.664144] pci 0003:31:00.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit]
[ 1.665153] pci 0003:31:00.0: supports D1 D2
[ 1.665530] pci 0003:31:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 1.675836] pci_bus 0003:31: busn_res: [bus 31-3f] end is updated to 31
[ 1.676432] pci 0003:30:00.0: BAR 14: assigned [mem 0xf3200000-0xf32fffff]
[ 1.677036] pci 0003:30:00.0: BAR 6: assigned [mem 0xf3300000-0xf330ffff pref]
[ 1.677670] pci 0003:30:00.0: BAR 13: assigned [io 0x100000-0x100fff]
[ 1.678245] pci 0003:31:00.0: BAR 4: assigned [mem 0xf3200000-0xf3203fff 64bit]
[ 1.679386] pci 0003:31:00.0: BAR 2: assigned [mem 0xf3204000-0xf3204fff 64bit]
[ 1.680074] pci 0003:31:00.0: BAR 0: assigned [io 0x100000-0x1000ff]
[ 1.680651] pci 0003:30:00.0: PCI bridge to [bus 31]
[ 1.681088] pci 0003:30:00.0: bridge window [io 0x100000-0x100fff]
[ 1.681654] pci 0003:30:00.0: bridge window [mem 0xf3200000-0xf32fffff]
[ 1.683954] pcieport 0003:30:00.0: PME: Signaling with IRQ 98
[ 1.684798] pcieport 0003:30:00.0: AER: enabled with IRQ 99
[ 1.686439] rockchip-dw-pcie a40800000.pcie: host bridge /pcie@fe170000 ranges:
[ 1.687096] rockchip-dw-pcie a40800000.pcie: IO 0x00f2100000..0x00f21fffff -> 0x00f2100000
[ 1.687890] rockchip-dw-pcie a40800000.pcie: MEM 0x00f2200000..0x00f2ffffff -> 0x00f2200000
[ 1.688661] rockchip-dw-pcie a40800000.pcie: MEM 0x0980000000..0x09bfffffff -> 0x0980000000
[ 1.689517] rockchip-dw-pcie a40800000.pcie: iATU unroll: enabled
[ 1.690052] rockchip-dw-pcie a40800000.pcie: iATU regions: 8 ob, 8 ib, align 64K, limit 8G
[ 1.895828] rockchip-dw-pcie a40800000.pcie: PCIe Gen.1 x1 link up
[ 1.896447] rockchip-dw-pcie a40800000.pcie: PCI host bridge to bus 0002:20
[ 1.897058] pci_bus 0002:20: root bus resource [bus 20-2f]
[ 1.897541] pci_bus 0002:20: root bus resource [io 0x200000-0x2fffff] (bus address [0xf2100000-0xf21fffff])
[ 1.898397] pci_bus 0002:20: root bus resource [mem 0xf2200000-0xf2ffffff]
[ 1.898998] pci_bus 0002:20: root bus resource [mem 0x980000000-0x9bfffffff]
[ 1.899631] pci 0002:20:00.0: [1d87:3588] type 01 class 0x060400
[ 1.900188] pci 0002:20:00.0: reg 0x38: [mem 0x00000000-0x0000ffff pref]
[ 1.900821] pci 0002:20:00.0: supports D1 D2
[ 1.901196] pci 0002:20:00.0: PME# supported from D0 D1 D3hot
[ 1.904836] pci 0002:20:00.0: Primary bus is hard wired to 0
[ 1.905334] pci 0002:20:00.0: bridge configuration invalid ([bus 01-ff]), reconfiguring
[ 1.906171] pci 0002:21:00.0: [14e4:449d] type 00 class 0x028000
[ 1.906782] pci 0002:21:00.0: reg 0x10: [mem 0x00000000-0x0000ffff 64bit]
[ 1.907424] pci 0002:21:00.0: reg 0x18: [mem 0x00000000-0x003fffff 64bit]
[ 1.908559] pci 0002:21:00.0: supports D1 D2
[ 1.908936] pci 0002:21:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 1.919821] pci_bus 0002:21: busn_res: [bus 21-2f] end is updated to 21
[ 1.920415] pci 0002:20:00.0: BAR 14: assigned [mem 0xf2200000-0xf27fffff]
[ 1.921019] pci 0002:20:00.0: BAR 6: assigned [mem 0xf2800000-0xf280ffff pref]
[ 1.921655] pci 0002:21:00.0: BAR 2: assigned [mem 0xf2400000-0xf27fffff 64bit]
[ 1.922332] pci 0002:21:00.0: BAR 0: assigned [mem 0xf2200000-0xf220ffff 64bit]
[ 1.923010] pci 0002:20:00.0: PCI bridge to [bus 21]
[ 1.923447] pci 0002:20:00.0: bridge window [mem 0xf2200000-0xf27fffff]
[ 1.925654] pcieport 0002:20:00.0: PME: Signaling with IRQ 110
[ 1.926497] pcieport 0002:20:00.0: AER: enabled with IRQ 111
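If you want to double-check that a given kernel was built with this driver, and assuming it exposes its configuration through /proc/config.gz, something like the following should do (the Kconfig symbol should be PCIE_ROCKCHIP_DW_HOST in recent mainline trees, if I’m not mistaken):
# zcat /proc/config.gz | grep -i pcie_rockchip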
Here is the output of lspci:
# lspci
0000:00:00.0 PCI bridge: Rockchip Electronics Co., Ltd RK3588 (rev 01)
0002:20:00.0 PCI bridge: Rockchip Electronics Co., Ltd RK3588 (rev 01)
0002:21:00.0 Network controller: Broadcom Inc. and subsidiaries Device 449d (rev 02)
0003:30:00.0 PCI bridge: Rockchip Electronics Co., Ltd RK3588 (rev 01)
0003:31:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
As you can see, two network devices are detected: the Broadcom one is the onboard Wi-Fi module (attached over PCIe), and the Realtek one is a NIC connected to another PCIe port of the SoC.
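To see which kernel driver (if any) is bound to each of these devices, lspci can also print the numeric IDs and the driver in use:
# lspci -nnk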
I have two PCIe boards on hand: a dual SATA controller and a 4x GbE NIC. Let’s plug them into the PCIe slot!
PCIe SATA controller
I’m using the Pine64 SATA controller:
lspci shows that the board is correctly detected:
# lspci
...
0000:01:00.0 SATA controller: JMicron Technology Corp. JMB58x AHCI SATA controller
...
lsblk shows the SATA SSD connected to the board:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 931.5G 0 disk
|-sda1 8:1 0 300M 0 part
|-sda2 8:2 0 922.4G 0 part
`-sda3 8:3 0 8.8G 0 part
...
hdparm shows that it performs quite well:
# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 9672 MB in 2.00 seconds = 4843.71 MB/sec
Timing buffered disk reads: 1400 MB in 3.00 seconds = 466.43 MB/sec
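hdparm is only a rough benchmark; if you want a second data point, a sequential read that bypasses the page cache can be done with dd (this is read-only, so it is safe to run on a disk that holds data):
# dd if=/dev/sda of=/dev/null bs=1M count=2048 iflag=direct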
Intel 4x GbE NIC
Again, lspci confirms that the board and its four interfaces are correctly detected:
# lspci
0000:01:00.0 PCI bridge: Microsemi / PMC / IDT PES12N3A 12-lane 3-Port PCI Express Switch (rev 0e)
0000:02:02.0 PCI bridge: Microsemi / PMC / IDT PES12N3A 12-lane 3-Port PCI Express Switch (rev 0e)
0000:02:04.0 PCI bridge: Microsemi / PMC / IDT PES12N3A 12-lane 3-Port PCI Express Switch (rev 0e)
0000:03:00.0 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (Copper) (rev 06)
0000:03:00.1 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (Copper) (rev 06)
0000:04:00.0 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (Copper) (rev 06)
0000:04:00.1 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (Copper) (rev 06)
I connected two of the four interfaces to my LAN, and they each received an IP address via DHCP:
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:15:17:67:25:bd brd ff:ff:ff:ff:ff:ff
inet 192.168.1.229/24 metric 1024 brd 192.168.1.255 scope global dynamic enp3s0f0
valid_lft 43140sec preferred_lft 43140sec
inet6 fdc0:4222:ba57::e35/128 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fdc0:4222:ba57:0:215:17ff:fe67:25bd/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7137sec preferred_lft 1737sec
inet6 2a02:2788:925:e33a:215:17ff:fe67:25bd/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7137sec preferred_lft 1737sec
inet6 fe80::215:17ff:fe67:25bd/64 scope link
valid_lft forever preferred_lft forever
3: enp3s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:15:17:67:25:bc brd ff:ff:ff:ff:ff:ff
inet 192.168.1.228/24 metric 1024 brd 192.168.1.255 scope global dynamic enp3s0f1
valid_lft 43140sec preferred_lft 43140sec
inet6 fdc0:4222:ba57::529/128 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fdc0:4222:ba57:0:215:17ff:fe67:25bc/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7137sec preferred_lft 1737sec
inet6 2a02:2788:925:e33a:215:17ff:fe67:25bc/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7137sec preferred_lft 1737sec
inet6 fe80::215:17ff:fe67:25bc/64 scope link
valid_lft forever preferred_lft forever
4: enp4s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 00:15:17:67:25:bf brd ff:ff:ff:ff:ff:ff
5: enp4s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 00:15:17:67:25:be brd ff:ff:ff:ff:ff:ff
6: enP3p49s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether a6:84:24:95:e4:98 brd ff:ff:ff:ff:ff:ff
7: end0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 6e:a5:0b:37:8d:0f brd ff:ff:ff:ff:ff:ff
iperf3 confirms that they are working fine at 1000 Mbps:
# iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 192.168.1.221, port 36574
[ 5] local 192.168.1.229 port 5201 connected to 192.168.1.221 port 36588
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 112 MBytes 941 Mbits/sec
[ 5] 1.00-2.00 sec 112 MBytes 941 Mbits/sec
[ 5] 2.00-3.00 sec 112 MBytes 941 Mbits/sec
[ 5] 3.00-4.00 sec 112 MBytes 939 Mbits/sec
[ 5] 4.00-5.00 sec 112 MBytes 940 Mbits/sec
[ 5] 5.00-6.00 sec 112 MBytes 941 Mbits/sec
[ 5] 6.00-7.00 sec 112 MBytes 941 Mbits/sec
[ 5] 7.00-8.00 sec 112 MBytes 939 Mbits/sec
[ 5] 8.00-9.00 sec 112 MBytes 937 Mbits/sec
[ 5] 9.00-10.00 sec 112 MBytes 941 Mbits/sec
[ 5] 10.00-10.00 sec 74.9 KBytes 873 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 1.09 GBytes 940 Mbits/sec receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
Accepted connection from 192.168.1.221, port 37414
[ 5] local 192.168.1.229 port 5201 connected to 192.168.1.221 port 37416
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 114 MBytes 958 Mbits/sec 0 383 KBytes
[ 5] 1.00-2.00 sec 112 MBytes 938 Mbits/sec 0 383 KBytes
[ 5] 2.00-3.00 sec 113 MBytes 947 Mbits/sec 0 383 KBytes
[ 5] 3.00-4.00 sec 112 MBytes 939 Mbits/sec 0 447 KBytes
[ 5] 4.00-5.00 sec 113 MBytes 946 Mbits/sec 0 447 KBytes
[ 5] 5.00-6.00 sec 112 MBytes 942 Mbits/sec 0 447 KBytes
[ 5] 6.00-7.00 sec 112 MBytes 936 Mbits/sec 0 447 KBytes
[ 5] 7.00-8.00 sec 113 MBytes 946 Mbits/sec 0 447 KBytes
[ 5] 8.00-9.00 sec 112 MBytes 940 Mbits/sec 0 447 KBytes
[ 5] 9.00-10.00 sec 112 MBytes 938 Mbits/sec 0 447 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.10 GBytes 943 Mbits/sec 0 sender
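For completeness, the client side (run from another machine on the LAN, 192.168.1.221 in this case) would look something like this; the -R flag reverses the direction so the board’s transmit path gets exercised as well:
# iperf3 -c 192.168.1.229
# iperf3 -c 192.168.1.229 -R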
Onboard network interfaces
Since the Realtek NIC was detected by lspci and the GMAC interface already showed up in the ip addr output above, we can expect the onboard NICs to just work… and they do!
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enP3p49s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether a6:84:24:95:e4:98 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.223/24 metric 1024 brd 192.168.1.255 scope global dynamic enP3p49s0
valid_lft 43194sec preferred_lft 43194sec
inet6 fdc0:4222:ba57::358/128 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fdc0:4222:ba57:0:a484:24ff:fe95:e498/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7197sec preferred_lft 1797sec
inet6 2a02:2788:925:e33a:a484:24ff:fe95:e498/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7197sec preferred_lft 1797sec
inet6 fe80::a484:24ff:fe95:e498/64 scope link
valid_lft forever preferred_lft forever
3: end0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 6e:a5:0b:37:8d:0f brd ff:ff:ff:ff:ff:ff
inet 192.168.1.167/24 metric 1024 brd 192.168.1.255 scope global dynamic end0
valid_lft 43194sec preferred_lft 43194sec
inet6 fdc0:4222:ba57::3ca/128 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fdc0:4222:ba57:0:6ca5:bff:fe37:8d0f/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7197sec preferred_lft 1797sec
inet6 2a02:2788:925:e33a:6ca5:bff:fe37:8d0f/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7197sec preferred_lft 1797sec
inet6 fe80::6ca5:bff:fe37:8d0f/64 scope link
valid_lft forever preferred_lft forever
end0 corresponds to the ‘GMAC’ interface labelled ‘RJ45’ on the PCB. enP3p49s0 is the Realtek NIC labelled ‘PCIe RJ45’.
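If you want to confirm which driver handles each of them and the negotiated link speed, ethtool gives a quick answer (assuming it is installed):
# ethtool -i end0
# ethtool -i enP3p49s0
# ethtool end0 | grep Speed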
iperf3 shows results similar to those of the Intel PCIe NIC: they work at the full 1 Gbps!
SATA
The board exposes two SATA connectors. Currently, only the first one (SATA30_0) is working. The second one is a bit tricky since it shares data lines with the PCIe lanes going to the onboard WiFi chips, and developers have not been able to enable it yet.
Let’s connect a SATA SSD to the SATA30_0 connector:
Again, lsblk detects the SSD:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 931.5G 0 disk
|-sda1 8:1 0 300M 0 part
|-sda2 8:2 0 922.4G 0 part
`-sda3 8:3 0 8.8G 0 part
And hdparm shows results similar to when the drive was connected to the PCIe SATA controller:
# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 9782 MB in 2.00 seconds = 4899.11 MB/sec
Timing buffered disk reads: 1250 MB in 3.00 seconds = 416.30 MB/sec
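To verify that the drive actually negotiated a full 6 Gbps link on the onboard controller, the SATA link speed is reported both in the kernel log and in sysfs (the link numbering may differ on your setup):
# dmesg | grep -i 'sata link'
# cat /sys/class/ata_link/link*/sata_spd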
Thermals
There are 7 thermal zones exposed in /sys/class/thermal:
# cat /sys/class/thermal/thermal_zone*/temp
29615
30538
29615
29615
29615
29615
29615
So the CPU is idling at around 30°C, with a room temperature of about 20°C.
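These values are in millidegrees Celsius, and the raw list does not tell you which zone is which; printing each zone’s type next to its reading makes this clearer:
# for z in /sys/class/thermal/thermal_zone*; do echo "$(cat "$z"/type): $(cat "$z"/temp)"; done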
Wrap up
I have to say that I’m positively impressed by the current software support in nearly mainline Linux: we already have a working CPU with PCIe, GbE NICs, SD card, eMMC, and SATA.
There’s still a lot of work to be done before we can say the CPU and board are fully supported: USB 2 and USB 3, HDMI inputs and outputs, and probably many other things under the hood still need to be implemented and integrated into the Linux kernel!
One reason I’m writing this post is that I couldn’t find much up-to-date information about the progress of software development for the RK3588 and support for the QuartzPro64. If you are working on something related to the QuartzPro64, please let us know!