• Re: ssd6202a RAID card and Clonezilla?

    From Paul@3:633/10 to All on Sat Nov 1 05:16:58 2025
    On Sat, 11/1/2025 4:14 AM, T wrote:
    Hi All,

    Anyone know if this RAID card will work with clonezilla.
    I know most will not.

    https://www.highpoint-tech.com/product-page/ssd6202a

    Their web site states:

    HighPoint Industrial and Virtualization solutions feature integrated host, hardware-level RAID 0, 1 and JBOD capability. RAID arrays hosted by SSD6200 series controllers are fully transparent and will be automatically recognized by all current VM and OS platforms as available single-drives. No drivers, special procedures or software application is required.

    Yours in Confusion,
    -T

    https://www.techpowerup.com/327842/highpoint-unveils-new-ssd6202a-low-profile-aic-nvme-boot-drive-solution

    "Same Marvell 88NR41 controller as in Western Digital's WD AN1500."

    # Google gives me the wrong URL, but you get the idea.

    https://www.marvell.com/products/system-solutions/nvme-accelerators.html

    "By extending Marvell's SATA/SAS legacy controllers' feature set into the NVMe space" # Not according to CoPilot, oh well
    # A misleading blurb.

    CoPilot Question
    ----------------

    How does a Marvell 88NR41 NVMe RAID device present itself
    to the OS during hardware discovery, when the card is
    operated in RAID1 mode or JBOD mode?

    CoPilot answer
    --------------

    **In RAID1 mode, the Marvell 88NR41 NVMe RAID device presents a single virtual disk to the OS;
    in JBOD mode, each physical NVMe SSD is exposed individually.**

    Here's how the Marvell 88NR41 (often referenced as 88NR2241 in product literature)
    behaves during hardware discovery and OS enumeration:

    ### Device Presentation to the OS

    #### **RAID1 Mode**
    - **Virtualization Layer:** The controller aggregates two physical NVMe SSDs into a single *virtual disk*.
    - **OS View:** The operating system sees **one logical NVMe device**, representing the mirrored pair.
    - **Driver Role:** Marvell's NativeRAID firmware and driver stack handle the RAID logic internally, abstracting redundancy from the OS.
    - **Boot Support:** This virtual disk can be used as a boot device, especially in server environments where redundancy is critical.

    #### **JBOD Mode**
    - **Direct Mapping:** Each NVMe SSD is passed through directly to the OS without abstraction.
    - **OS View:** The OS enumerates **each physical NVMe drive independently**, as if they were connected directly via PCIe.
    - **Use Case:** Ideal for software RAID setups or when individual drive access is needed for performance or flexibility.

    ### Discovery Process

    - **PCIe Enumeration:** The card appears as a PCIe device with multiple endpoints depending on configuration.
    - **NVMe Controller Exposure:** In RAID1, the virtual disk is exposed via a virtual NVMe controller. In JBOD, each SSD's native controller is visible.
    - **CLI Tools:** Marvell provides command-line utilities to inspect physical and virtual disk mappings, useful for verifying RAID status and drive health.
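    The RAID1-versus-JBOD distinction above can be sketched as a simple controller count. This is a hedged illustration, not Marvell tooling: the heuristic (one virtual controller in RAID1, one native controller per drive in JBOD) is taken from CoPilot's description, and on a live Linux box you would feed it the output of `glob.glob('/dev/nvme*')`.

```python
import re

def visible_nvme_controllers(devnodes):
    """Count distinct NVMe controllers in a list of /dev node names.
    /dev/nvme0 is a controller; /dev/nvme0n1 is a namespace on it."""
    ctrls = set()
    for node in devnodes:
        m = re.match(r"/dev/nvme(\d+)", node)
        if m:
            ctrls.add(int(m.group(1)))
    return len(ctrls)

def guess_mode(devnodes, physical_drives=2):
    """Heuristic only: RAID1 collapses both SSDs into one virtual
    controller; JBOD exposes each drive's native controller."""
    n = visible_nvme_controllers(devnodes)
    if n == 1 and physical_drives > 1:
        return "RAID1 (single virtual disk)"
    if n == physical_drives:
        return "JBOD (drives passed through)"
    return "unknown"

print(guess_mode(["/dev/nvme0", "/dev/nvme0n1"]))
print(guess_mode(["/dev/nvme0", "/dev/nvme0n1",
                  "/dev/nvme1", "/dev/nvme1n1"]))
```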

    ### Additional Notes

    - **DRAM-less Architecture:** The 88NR41 uses a DRAM-less design, relying on host memory and low-latency NVMe transactions.
    - **RAID Modes Supported:** RAID0, RAID1, RAID10, and JBOD are all supported, allowing flexible deployment strategies.
    - **Target Use Cases:** Boot OS replication, hyperconverged infrastructure, and high-availability storage systems.

    Would you like a breakdown of how the virtual disk metadata is structured or how the firmware handles failover in RAID1? I can dig into that next.

    ******************************************************************************

    So then, it's not as fancy as presented. It uses the standard NVMe driver,
    and if Clonezilla can handle an NVMe drive, then it can handle
    that thing.

    Normally, when we analyze these things, we extract info about the implementation
    from the RAID modes supported. You'll notice a lot of this crap
    never goes past RAID0/1/10. RAID5/6 requires XOR for read/write, and the chip
    runs hot while doing so. The four-drive version of the card is still limited
    to the same RAID modes (it does not magically do RAID5). In the four-drive
    card case, I/O rates per sled are cut in half (2*1GB/sec lanes).

    The card in question is PCIe Rev3, which is on the order of 3500MB/sec or so.
    In RAID1 mirror mode, you're "looking at one stick". If it really does
    use all eight lanes in the PCIe switch, then with two JBOD drives it should
    be able to pull 7000MB/sec (like if you used a softraid on top of it maybe).
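    The 3500 and 7000 MB/sec figures fall out of the Gen3 lane arithmetic. A quick back-of-the-envelope, where the 0.9 protocol-efficiency factor is my assumption (TLP and flow-control overhead vary), not a measured number:

```python
# PCIe Gen3 runs 8 GT/s per lane with 128b/130b encoding,
# i.e. roughly 985 MB/s of raw payload bandwidth per lane.
GEN3_LANE_MBPS = 8000 * 128 / 130 / 8  # ~984.6 MB/s

def pcie_mbps(lanes, efficiency=0.9):
    """Rough usable throughput; 'efficiency' is an assumed fudge
    factor for TLP/flow-control overhead."""
    return lanes * GEN3_LANE_MBPS * efficiency

print(round(pcie_mbps(4)))   # one x4 stick (RAID1 mirror view): ~3500 MB/s
print(round(pcie_mbps(8)))   # two x4 drives in JBOD: ~7000 MB/s
```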

    https://www.digchip.com/datasheets/parts/datasheet/2/277/88NR2241B.php

    "No custom drivers are necessary as the accelerator supports native NVMe in-box drivers
    and is compatible with most popular operating systems today such as VMware ESXi,
    Windows Server and popular Linux distributions."

    Another citation in the CoPilot here:

    https://filedn.com/lG3WBCwKGHT7yNuTsFCwXy0/HighPoint-Download/Document/Guide/SSD6200/HighPoint%20SSD62xx%20Series%20CLI%20User%20Guid.pdf

    This suggests that, unless otherwise stated, it won't boot on my 4930K X79, because
    that board has no NVMe INT13 support, and I can only use NVMe as data drives seen at OS time.
    It would work on the newer Zen3 boxes, which all have NVMe boot support onboard. It might
    not be as good as some other card we discussed, which seemed to have a boot EEPROM
    on it to make that product visible to the BIOS.

    At least CoPilot confirms that some of the marketing weasel words are not correct.
    It looks approximately like an NVMe stick.

    Just a guess,
    Paul

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)