My understanding is that the AM4 socket is limited to the following:
1 16x PCIe 4.0 link wired directly to the CPU (typically the main PCIe 16x slot for the GPU, or split to 8x + 8x or 8x + 4x + 4x across 2 or 3 PCIe slots)
1 4x PCIe 4.0 link wired directly to the CPU (typically an M.2 slot for an NVMe storage device)
1 4x PCIe 4.0 link to the chipset (LAN, USB, SATA, etc.)
I have the following that I want to plug into a X570 mobo:
RTX 3080 / 6900 XT (depending on which I can get my hands on first)
Mellanox MCX4121A dual-port 25 Gbps NIC (8x PCIe 3.0)
Sabrent Rocket Q4 NVMe 4.0
I'm thinking that the best compromise would be to run those 3 devices as either:
GPU 8x PCIe 4.0 in PCIe 16x slot 1
NIC 8x PCIe 3.0 in PCIe 16x slot 2 (or 3)
NVMe 4x PCIe 4.0 in M.2 slot
Or (not sure this will work, since if the NIC slot runs off CPU lanes that effectively leaves no PCIe lanes for the chipset link; see the quick lane-budget sketch after this list):
GPU 16x PCIe 4.0 in PCIe 16x slot 1
NIC 4x PCIe 3.0 in PCIe 16x slot 2 (or 3)
NVMe 4x PCIe 4.0 in M.2 slot
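To sanity-check that concern, here's a rough lane-budget sketch in Python. The assumption (mine, not from any board manual) is that every device above is fed from the CPU's 24 usable lanes, with the x4 chipset uplink counted as part of the budget:

```python
# Rough AM4 CPU lane budget check (assumes 24 usable lanes: 16 GPU + 4 NVMe + 4 chipset uplink).
CPU_LANES = 24
CHIPSET_UPLINK = 4  # x4 PCIe 4.0 link from the CPU to the X570 chipset

configs = {
    "Config 1 (GPU x8 + NIC x8 + NVMe x4)": {"GPU": 8, "NIC": 8, "NVMe": 4},
    "Config 2 (GPU x16 + NIC x4 + NVMe x4)": {"GPU": 16, "NIC": 4, "NVMe": 4},
}

for name, devices in configs.items():
    used = sum(devices.values()) + CHIPSET_UPLINK
    spare = CPU_LANES - used
    status = "fits" if spare >= 0 else f"over budget by {-spare} lanes"
    print(f"{name}: {used}/{CPU_LANES} CPU lanes ({status})")
```

Config 1 lands exactly on 24 lanes; config 2 only adds up if the second slot actually hangs off the chipset rather than the CPU, which is the part I'm unsure about.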
Granted, PCIe 4.0 is effectively twice as fast as 3.0 per lane, so maybe I won't take a performance hit running the GPU at 8x (assuming a PCIe 4.0 capable GPU, of course).
PCIe 3.0 x4 is roughly 31.5 Gbps of usable bandwidth, so as long as I run just one port on the NIC I'll be fine, but if I want to LAG both ports together for 50 Gbps, the x4 link becomes the limiting factor.
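For reference, here's my back-of-the-envelope math as a quick Python sketch (raw usable link bandwidth with 128b/130b encoding only, ignoring PCIe protocol overhead, which is roughly where my ~31.5 Gbps figure comes from):

```python
# Usable PCIe bandwidth per lane: line rate (GT/s) * 128/130 encoding efficiency.
# Ignores TLP/DLLP protocol overhead, so real throughput is a bit lower.
PER_LANE_GBPS = {"3.0": 8 * 128 / 130, "4.0": 16 * 128 / 130}

def link_gbps(gen: str, lanes: int) -> float:
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("3.0", 4), ("3.0", 8), ("3.0", 16), ("4.0", 4), ("4.0", 8), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {link_gbps(gen, lanes):.1f} Gbps")

# One 25 Gbps port fits in a 3.0 x4 link (~31.5 Gbps); a 50 Gbps LAG needs 3.0 x8 (or 4.0 x4).
print("50 Gbps LAG over 3.0 x4:", link_gbps("3.0", 4) >= 50)  # False
print("50 Gbps LAG over 3.0 x8:", link_gbps("3.0", 8) >= 50)  # True
```

This also shows x8 PCIe 4.0 has the same raw bandwidth as x16 PCIe 3.0, which is why I'm hoping the GPU at 8x won't hurt.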
Do most X570 mobos support either of these configs? (I'm looking mainly at the Asus ROG/TUF offerings without WiFi.)
I'm coming from Intel where I have been spoiled with 40 PCIe 3.0 lanes for about as far back as I can remember, so being limited to 24 lanes will take some getting used to.
Thanks!