PMEM

Some nodes of Grid'5000 feature the new Persistent Memory technology. As of this writing, the troll cluster in Grenoble is equipped with it.

Foreword

This Persistent Memory technology is known by many different names, e.g.

  • nvdimm (generic term; nvdimm-N = battery-backed DRAM, nvdimm-P...)
  • SCM (storage class memory)
  • PMM/PMEM

In the rest of this document, we'll use the PMEM acronym.

The PMEM technology currently available in Grid'5000 is Intel's Optane DC Persistent Memory. Other vendors may provide PMEM in the future (IBM, HPE Memristor?). PMEM has also been available for testing in emulators such as QEMU for a long time.
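
For instance, a PMEM device can be emulated on any machine with QEMU (a minimal sketch; the image path, sizes and ids are arbitrary examples):

qemu-system-x86_64 -machine pc,nvdimm=on -m 4G,slots=2,maxmem=8G \
  -object memory-backend-file,id=mem1,share=on,mem-path=/tmp/nvdimm.img,size=2G \
  -device nvdimm,memdev=mem1,id=nv1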

This technology consists of DIMMs (just like DRAM) but offering a different set of characteristics:

  • It fills the gap between memory and storage: in terms of latency, PMEM is roughly 10x slower than DRAM but roughly 100x faster than a high-end NVMe SSD
  • Persistence: can be used as (persistent) memory or filesystem on steroids
  • Byte addressable, zero-copy memory mapping
  • No energy consumption when idle, but more than RAM when used
  • Lower price per GB compared to DRAM, larger memory sizes than DRAM

This technology should not be confused with the generic term NVRAM, nor with NVMe storage (SSDs attached via PCIe).

Intel's PMEM settings

Intel's PMEM can be configured in 2 modes:

Memory
  • Just more RAM, no persistence. DRAM serves as a cache (it disappears from the operating system's viewpoint).
App Direct
  • Many configuration choices (see the sketch below):
    • DIMM interleaving within a region (changing it requires a reboot)
    • splitting of a region into namespaces (changes may require a reboot)
    • namespace modes: sector, fsdax, devdax, kmem (kmem is not available before Linux 5.1)
Mixed mode
  • It is also possible to allocate part of the PMEM capacity to Memory mode and the rest to App Direct.

In order to change the configuration (e.g. from Memory mode to App Direct mode, or vice versa), a reboot of the machine is needed.
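
As a hedged sketch, the corresponding goal commands (using the ipmctl tool presented below; the 50% split is an arbitrary example) could look like:

# App Direct with interleaved DIMMs
ipmctl create -goal PersistentMemoryType=AppDirect
# App Direct without interleaving
ipmctl create -goal PersistentMemoryType=AppDirectNotInterleaved
# Mixed mode: 50% of the capacity in Memory mode, the rest in App Direct
ipmctl create -goal MemoryMode=50 PersistentMemoryType=AppDirect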

Grid'5000 setup for experimentation

Grid'5000 configures PMEM in Memory mode by default: in the Grid'5000 standard environment (i.e. when not deploying), the PMEM simply appears as additional RAM.
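
For instance, in the standard environment, the PMEM capacity simply shows up in the total memory reported by the usual tools:

Terminal.png troll-2:
free -h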

Kadeploy makes it possible to experiment with the App Direct mode. We encourage users who want to experiment with App Direct mode to deploy a very recent system (e.g. Debian testing), in order to benefit from the latest PMEM support.

For that purpose, jobs need to be of the deploy type, and kadeploy must be used:

Terminal.png fgrenoble:
oarsub -p "cluster='troll'" -t deploy -I

Then:

Terminal.png fgrenoble:
kadeploy3 -e debiantesting-x64-min -f $OAR_NODEFILE -k

Once a node is deployed, one can connect to it as root, install the PMEM tools, change the configuration if needed, and reboot to apply it.
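
For example, assuming troll-2 was deployed:

Terminal.png fgrenoble:
ssh root@troll-2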

The PMEM tools are:

  • ipmctl: tool to change the config of Intel's PMEM (switch mode, etc.)
  • ndctl: tool to configure PMEM when in App Direct mode
  • daxctl: tool to configure the PMEM direct access (dax)

Install in Debian testing as follows:

Terminal.png troll-2:
apt install ipmctl ndctl daxctl

See the tools' man pages or external documentation (see the References section) for how to use them.
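
For instance, a few read-only commands to inspect the current PMEM configuration (a sketch; the exact options are detailed in the man pages):

Terminal.png troll-2:
ipmctl show -dimm             # list the PMEM DIMMs
ipmctl show -memoryresources  # capacity split between Memory mode and App Direct
ndctl list -Ru                # list regions (human-readable)
daxctl list                   # list dax devices, if any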

For instance, to switch to App Direct mode with interleaved DIMMs, one can run:

Terminal.png troll-2:
ipmctl create -goal MemoryMode=0

And then reboot.
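
After the reboot, the PMEM appears as one or more regions in which namespaces can be created. A hedged sketch of a typical fsdax workflow (the device name pmem0 and the mount point are examples and may differ):

Terminal.png troll-2:
ndctl create-namespace --mode=fsdax   # creates a /dev/pmemN block device in the first available region
mkfs.ext4 /dev/pmem0                  # ext4 and xfs support DAX
mkdir -p /mnt/pmem
mount -o dax /dev/pmem0 /mnt/pmem     # mount with direct access (DAX) enabled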

Reboot time of the machine is pretty long (~ 10 minutes), so be patient. You might want to look at the console to follow the progress:

Terminal.png fgrenoble:
kaconsole3 -m troll-2

Important notes
  • Please note that when a job terminates, the nodes of the job are automatically reconfigured to the default mode of operation, i.e. Memory mode.
  • Please note that sudo-g5k is of NO help for experimenting with the App Direct mode: rebooting the node after changing the configuration will terminate the job and switch it back to Memory mode. Using the App Direct mode requires kadeploying.

References