R430 Hardware Upgrade

Dual CPU Upgrade + Nvidia T400 GPU Installation

Detail          Value
Server          Dell PowerEdge R430 (1U)
Current CPU     Intel Xeon E5-2640 v3 (8C/16T)
Target CPUs     2 × Intel Xeon E5-2680 v4 (SR2N7)
Target config   28 cores / 56 threads total
GPU             Nvidia T400 (Turing NVENC, 30W)
PSU config      Dual 550W hot-plug redundant
Hypervisor      Proxmox VE
Plex runtime    LXC container

Part 1: Pre-maintenance checklist

Gather all materials and prepare the environment before starting. The entire procedure can be completed in a single maintenance window of approximately 45–60 minutes for hardware, plus 30–45 minutes for software configuration.

1.1 Parts and materials

  • [ ] 2 × Intel Xeon E5-2680 v4 CPU (S-Spec: SR2N7) — pulled/refurb, ~$30–50 AUD each on eBay
  • [ ] Nvidia T400 GPU (low-profile) — refurbished, LP bracket fitted
  • [ ] Second heatsink (Dell P/N 02FKY9) — for CPU2 socket; ~$15–30 AUD on eBay
  • [ ] Thermal paste — Arctic MX-4, Noctua NT-H1, or similar non-conductive compound
  • [ ] Isopropyl alcohol (90%+) — for cleaning old thermal compound
  • [ ] Lint-free cloths or coffee filters — do not use paper towels
  • [ ] Anti-static wrist strap — or regularly ground yourself on the chassis
  • [ ] Torx T30 screwdriver — for heatsink screws
  • [ ] Phillips #2 screwdriver — for riser bracket / general

1.2 Pre-flight checks

  • [ ] Verify both CPUs show S-Spec SR2N7 printed on the IHS (metal heat spreader)
  • [ ] Update BIOS to latest version via iDRAC (Lifecycle Controller → Firmware Update)
  • [ ] Update iDRAC firmware (latest 2.x release for iDRAC8)
  • [ ] Record current BIOS settings (screenshot or note any custom settings)
  • [ ] Backup Proxmox config (vzdump or copy /etc/pve/ — see the sketch after this checklist)
  • [ ] Gracefully shut down all VMs/CTs (verify clean shutdown in Proxmox UI)
  • [ ] Check current chassis fan count — 5 fans needed for dual-CPU; note empty bay
  • [ ] Check PIB for second CPU power cable (look behind PSU bays while chassis is open)
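
A minimal sketch of the backup and shutdown steps, run on the Proxmox host. The archive path and filename are only examples, and the shutdown loop parses the default pct/qm list output, so sanity-check it against your version first:

# Snapshot the Proxmox configuration directory (example path/filename)
tar czf /root/pve-config-$(date +%Y%m%d).tar.gz /etc/pve

# Gracefully shut down all running containers and VMs
pct list | awk 'NR>1 && $2=="running" {print $1}' | xargs -r -n1 pct shutdown
qm list  | awk 'NR>1 && $3=="running" {print $1}' | xargs -r -n1 qm shutdown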

Warning

Power down the server fully and disconnect both power cables before opening the chassis. Wait 30 seconds for capacitors to discharge. The PSU standby LED on the motherboard should be off.


Part 2: Hardware procedure

This section covers the GPU installation and dual CPU swap in a single maintenance window. The order is: open chassis → install GPU first (easier access) → swap CPU1 → install CPU2 with second heatsink → reassemble.

2.1 Open the chassis

  1. Place the server on a stable, flat, non-conductive surface with good lighting.
  2. Disconnect both power cables from the rear PSUs.
  3. If rack-mounted, slide the server out on its rails; either remove it from the rails entirely or, if the rails lock in the extended position, work on it there.
  4. Rotate the latch release lock on the top cover counter-clockwise to the unlocked position.
  5. Press the cover release latch and slide the cover toward the rear of the system.
  6. Lift the cover away from the system.

Note

While the chassis is open, inspect the fan layout. Count the fan modules — if there are only 4 and you are going dual-CPU, note the empty 5th fan bay position. Also visually trace the Power Interface Board (PIB) behind the PSU bays and check if a second CPU power cable is already routed to the motherboard. Take a photo for reference.

2.2 Install the Nvidia T400 GPU

The T400 is a low-profile, single-slot card drawing approximately 30W from the PCIe slot. No auxiliary power cable is needed.

  1. Remove the cooling shroud (plastic air baffle) by lifting it straight up. Set aside.
  2. Locate the PCIe riser assembly. The R430 uses a riser card that slots vertically into the motherboard and holds expansion cards horizontally.
  3. Lift the riser assembly out of the chassis by pulling straight up from the blue retention latch.
  4. If a PCIe slot filler bracket is installed in the slot you intend to use, remove the screw and slide out the filler.
  5. Align the T400 card edge connector with the PCIe slot on the riser and press firmly until fully seated.
  6. Secure the T400 bracket with the retention screw.
  7. Do not reinstall the riser into the chassis yet — leave it aside while you swap the CPUs. This gives better access to the CPU sockets.

Tip

Verify the T400 has its low-profile bracket fitted, not a full-height bracket. The R430 riser only accepts half-height / low-profile cards.

2.3 Swap CPU1 (E5-2640 v3 → E5-2680 v4)

Remove the existing E5-2640 v3

  1. Using the Torx T30 screwdriver, loosen the four captive heatsink screws in a diagonal (cross) pattern. Do not fully remove — loosen evenly to avoid uneven pressure.
  2. Once all four screws are loose, lift the heatsink straight up. If it resists, gently twist (do not pry) to break the thermal paste seal.
  3. Set the heatsink face-up on a clean surface so the thermal paste residue doesn't contaminate anything.
  4. Release the CPU socket levers: push the lever near the unlock icon first (open-first lever) down and out from under its tab, then do the same for the lever near the lock icon (close-first lever).
  5. Lift both levers to approximately 90 degrees.
  6. Lift the CPU load plate / processor shield.
  7. Carefully lift the E5-2640 v3 straight up out of the socket. Handle by edges only. Place in an anti-static bag or the new CPU's packaging.

Warning

Do not touch the socket pins. Do not touch the gold contact pads on the bottom of the CPU. Any contamination (oils, debris) can cause intermittent failures or permanent damage.

Clean and prepare

  1. Apply isopropyl alcohol to a lint-free cloth and clean the old thermal paste from the heatsink base plate. Wipe in one direction, not circular.
  2. Inspect the heatsink surface — it should be clean, smooth, and mirror-like.

Install the first E5-2680 v4

  1. Unpack the E5-2680 v4. If it is a pulled/refurb chip, clean any existing thermal paste from the IHS with isopropyl alcohol.
  2. Locate the pin-1 indicator triangle on both the CPU and the socket. Align them.
  3. Lower the CPU straight down into the socket. It should drop in with zero force. If there is any resistance, the alignment is wrong — lift out and recheck.
  4. Close the processor shield / load plate.
  5. Lower the close-first socket lever (near the lock icon) and push it under its tab.
  6. Lower the open-first socket lever (near the unlock icon) and push it under its tab.
  7. Apply thermal paste to the centre of the CPU IHS — a pea-sized dot (approximately 4mm diameter). The heatsink pressure will spread it evenly.
  8. Lower the heatsink straight down onto the CPU, aligning the screw holes.
  9. Tighten the four captive screws in a diagonal (cross) pattern, gradually and evenly. Alternate between screws — do not fully tighten one before starting the next.

2.4 Install the second E5-2680 v4 in CPU2 socket

Locate the CPU2 socket (to the right of CPU1 when facing the front of the chassis). It will have a protective plastic cap over it.

  1. Remove the socket protective cap from CPU2. Keep it safe in case you ever need to revert to single-socket.
  2. Release both socket levers in the same order as CPU1: open-first lever (unlock icon) first, then close-first lever (lock icon). Lift both to 90 degrees.
  3. Lift the processor shield.
  4. Align the second E5-2680 v4 with the socket keys (pin-1 triangle to socket triangle). Lower it in with zero force.
  5. Close the processor shield.
  6. Lower the close-first lever (lock icon) and push under its tab, then the open-first lever (unlock icon).
  7. Apply thermal paste to CPU2 using the same method as CPU1.
  8. Place the second heatsink (02FKY9) onto CPU2. Tighten screws diagonally.
  9. Connect the CPU2 power cable from the PIB to the motherboard CPU2 power header if not already connected.

Note

If your chassis only has 4 fan modules and there is an empty 5th fan bay, install a 5th fan module before going dual-CPU. The server may boot without it, but airflow across the second CPU will be inadequate under sustained load.

2.5 Reassemble

  1. Reinstall the PCIe riser assembly (now containing the T400) into the motherboard slot. Press down firmly until the blue retention latch clicks.
  2. Reinstall the cooling shroud / air baffle.
  3. Reinstall the top cover — slide forward until the latch clicks, then rotate the latch lock clockwise.
  4. Reconnect both power cables.

Part 3: Post-hardware verification

3.1 First boot checks

  • Power on the server. Watch the front panel LCD or connect via iDRAC virtual console.
  • The server should POST normally. It may display a message about a configuration change (new CPUs detected) — this is expected.
  • Enter BIOS (F2 during POST) and verify both processors show as E5-2680 v4, 14 cores each, 2.40 GHz.
  • Check that all populated DIMM slots are detected — DIMMs in CPU2's slots only become active now that the second processor is installed.
  • Run Dell system diagnostics (F10 during POST or via Lifecycle Controller) to verify both processors operate correctly.
  • Exit BIOS and allow Proxmox to boot.

3.2 Proxmox host verification

# Verify dual CPU and core count
lscpu

# Verify T400 is detected
lspci | grep -i nvidia

# Check CPU temperatures at idle (35–50°C typical) — requires the lm-sensors package (apt install lm-sensors)
sensors

# Check for GPU initialisation errors
dmesg | grep -i nvidia

Expected lscpu output: 2 sockets, 14 cores per socket, 56 threads total, model name E5-2680 v4.
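
A quick way to pull just those fields from lscpu (a convenience one-liner, not required):

lscpu | grep -E 'Model name|Socket|Core|Thread'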

3.3 IPMI fan fix for third-party PCIe card

Dell R430 servers aggressively spin fans when an unvalidated PCIe card is detected. Run this from the Proxmox host to disable the aggressive fan response.

Install ipmitool if not present:

apt install ipmitool

Disable aggressive third-party PCIe fan response:

ipmitool raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00

Query the current setting:

ipmitool raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00
  • Response ending in 01 00 00 = disabled (what you want)
  • Response ending in 00 00 00 = enabled (fans will be aggressive)
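
If you later want to restore the default behaviour, the commonly documented enable command is the same string with the 0x01 byte set back to 0x00 (not from the original guide, so verify with the query command above afterwards):

ipmitool raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x00 0x00 0x00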

Note

This setting persists across reboots but may reset after a BIOS or iDRAC firmware update. If fans suddenly become aggressive after a firmware update, re-run the disable command.


Part 4: Software configuration

4.1 Install Nvidia drivers on Proxmox host

apt update
apt install pve-headers-$(uname -r)
apt install nvidia-driver nvidia-smi
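
If apt reports no installation candidate for nvidia-driver, the non-free repository components are probably not enabled. A minimal sketch assuming Proxmox VE 8 on Debian 12 "bookworm" — add the extra components to the existing Debian entry in /etc/apt/sources.list (adjust the release name to your install), then refresh:

# /etc/apt/sources.list — extend the existing bookworm entry with contrib / non-free components
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware

apt update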

After installation, reboot. Then verify:

nvidia-smi

If nvidia-smi returns an error, check that the nouveau driver is blacklisted:

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist-nouveau.conf
update-initramfs -u

Then reboot again.
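
After that reboot, confirm nouveau is no longer loaded — the following should print nothing:

lsmod | grep -i nouveau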

4.2 Enable Nvidia device persistence

nvidia-persistenced --persistence-mode
systemctl enable nvidia-persistenced
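
To confirm persistence mode is active (persistence_mode is a standard nvidia-smi query field; expect "Enabled"):

nvidia-smi --query-gpu=persistence_mode --format=csv,noheader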

4.3 Patch NVENC session limit

The T400 may be subject to Nvidia's software-imposed concurrent NVENC session limit. The keylase/nvidia-patch removes this restriction.

git clone https://github.com/keylase/nvidia-patch.git
cd nvidia-patch
bash ./patch.sh

Verify the patch was applied:

bash ./patch.sh -c

Note

This patch must be re-applied every time you update the Nvidia driver. After a driver update, cd back into the nvidia-patch directory, git pull, and run bash ./patch.sh again. Keep the repo in a persistent location like /opt/nvidia-patch/.

Post-patch practical stream limits for the T400 (4GB VRAM):

Resolution            Format              Simultaneous streams
1080p                 H.264 transcode     ~8–10
4K HDR → 1080p SDR    With tone mapping   ~2–3

The limiting factor post-patch is VRAM and NVENC engine throughput, not an artificial driver cap.
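
To see how close you are to those limits in practice, watch encoder utilisation, session count, and VRAM while streams are playing (both commands below use standard nvidia-smi options):

# Per-second utilisation, including encoder (enc) and decoder (dec) columns
nvidia-smi dmon -s u

# Active NVENC sessions and VRAM in use, refreshed every 5 seconds
nvidia-smi --query-gpu=encoder.stats.sessionCount,memory.used --format=csv -l 5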

4.4 Pass GPU to Plex LXC container

Identify the Nvidia device major numbers on the host:

ls -la /dev/nvidia*
stat -c '%t' /dev/nvidia0    # prints the major device number in hexadecimal

Edit the LXC container config (replace <CTID> with your Plex container ID):

nano /etc/pve/lxc/<CTID>.conf

Add the following lines (adjust major numbers to match your ls -la output):

lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 234:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file

Note

The cgroup device major numbers (195, 234, 509) vary between driver versions. Always verify with ls -la /dev/nvidia* on your specific host. Common mappings: 195 = nvidia core devices, 234 = nvidia-caps, 509 or 510 = nvidia-uvm. The optional flag handles devices that may not exist on all driver versions.
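
If you prefer to read the major numbers programmatically rather than off the ls output, a purely illustrative helper loop converts stat's hex value to decimal for each device node:

# List every Nvidia character device with its major number in decimal
for dev in /dev/nvidia* /dev/nvidia-caps/*; do
    [ -c "$dev" ] && printf '%s  major %d\n' "$dev" "0x$(stat -c '%t' "$dev")"
done

After saving the config, stop and start the container (pct stop <CTID> && pct start <CTID>) so the new cgroup and mount entries take effect.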

4.5 Install Nvidia libraries inside the LXC

The LXC container needs the matching Nvidia userspace libraries (the same version as the host driver). Push a .run installer of that version into the container and run it with --no-kernel-module:

# From the Proxmox host
pct push <CTID> ./NVIDIA-Linux-x86_64-<VERSION>.run /root/NVIDIA-Linux-x86_64-<VERSION>.run

# Enter the container and install
pct enter <CTID>
chmod +x /root/NVIDIA-Linux-x86_64-<VERSION>.run
/root/NVIDIA-Linux-x86_64-<VERSION>.run --no-kernel-module

Verify from inside the container:

nvidia-smi

Note

The driver version inside the LXC must exactly match the host driver version. If you update the host driver, re-push and re-run the installer inside the container with --no-kernel-module.
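
A quick way to confirm the two versions line up (pct exec runs a command inside the container from the host; replace <CTID>):

# Host driver version
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# Driver/library version seen inside the container
pct exec <CTID> -- nvidia-smi --query-gpu=driver_version --format=csv,noheader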

4.6 Enable hardware transcoding in Plex

  • Open Plex web UI and navigate to Settings → Transcoder
  • Enable "Use hardware acceleration when available"
  • Enable "Use hardware-accelerated video encoding"
  • Set the Transcoder temporary directory to /dev/shm (uses RAM — reduces disk I/O)
  • Save and test by playing a file that requires transcoding from a remote client

Tip

To verify hardware transcoding is active, play a stream that requires transcoding and check the Plex dashboard — it should show (hw) next to the transcode status. Monitor GPU usage from the Proxmox host with: nvidia-smi -l 1


Part 5: Reference

Final system specifications

Spec                    Value
CPUs                    2 × Intel Xeon E5-2680 v4 (SR2N7)
Total cores / threads   28 cores / 56 threads
CPU TDP                 120W each, 240W combined
GPU                     Nvidia T400 (Turing NVENC, 30W)
Max DIMM slots          12 (6 per CPU)
Max RAM                 384GB RDIMM / 768GB LRDIMM
PSU                     Dual 550W hot-plug redundant
Socket type             LGA 2011-3 (Narrow ILM)

Power budget estimate

Component                   TDP / Draw   Notes
2 × E5-2680 v4              240W         120W TDP each
Nvidia T400                 30W          Slot-powered
DDR4 RDIMMs (12 slots)      ~60W         ~5W per DIMM
Drives (8 × 2.5" SSD/HDD)   ~40W         ~5W each
Fans, motherboard, misc     ~50W         Estimate
Total estimated peak        ~420W        Well within a single 550W PSU
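
To compare the estimate with what the server actually draws, iDRAC exposes a live power reading over IPMI (using the same local ipmitool access set up in Part 3; treat this as an optional cross-check):

ipmitool dcmi power reading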