G5k-checks

From Grid5000


Description

g5k-checks

  • g5k-checks is expected to be integrated into the production environment of the Grid'5000 computational nodes. It gathers a collection of programs that check that a node meets several basic requirements before the node declares itself available to the OAR server.
  • This lets the admins enable checkers that may be very specific to the hardware of a cluster.

g5k-checks executes at boot time in two phases:

  • Phase 1
    • An init script, /etc/init.d/g5k-checks, runs all checkers that must run early enough in the boot process.
    • They are listed in the variable CHECKS_FOR_INIT in the configuration file.
    • Then it enables all checkers listed in the variable CHECKS_FOR_OAR for Phase 2.
  • Phase 2
    • This phase strongly relies on the check mechanism provided by OAR and the oar-node configuration file (/etc/default/oar-node for deb distros, /etc/sysconfig/oar-node for rpm ones).
    • The oar-node flavour of the OAR installation embeds an hourly cron job, oarnodecheckrun, which runs all executable files stored in /etc/oar/check.d/. The server then periodically invokes oarnodecheckquery remotely, which returns with status 0 if and only if there is no log file in a given OAR directory. So if a checker in /etc/oar/check.d/ finds something wrong, it simply has to create a log file in that directory.
    • The version of /etc/(default|sysconfig)/oar-node that g5k-checks installs runs both the oarnodecheckrun and oarnodecheckquery scripts. If the latter fails, the node is not yet ready: it loops running those scripts until either oarnodecheckquery returns 0 or a timeout is reached. If the timeout is reached, it does not attempt to declare the node "Alive".
    • During Phase 1, enabling a checker simply amounts to adding, in /etc/oar/check.d, a symbolic link to its "oar-node driver": a short script that interfaces the core checker with the OAR check mechanism.
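As an illustration, an oar-node driver placed in /etc/oar/check.d/ could look like the following sketch. The checker name, the stub function, and the checklog directory below are assumptions for illustration only, not the actual g5k-checks code:

```shell
#!/bin/sh
# Hypothetical oar-node driver for a checker called "example".
# The checklog directory path is an assumption; real paths are site-specific.
CHECKLOG_DIR="${CHECKLOG_DIR:-/var/lib/oar/checklogs}"

run_example_checker() {
    # Stand-in for the real core checker; returns 0 when the node is sane.
    true
}

if run_example_checker; then
    : # Nothing to do: no log file means oarnodecheckquery keeps returning 0.
else
    # Failure: create a log file; its mere existence makes oarnodecheckquery
    # return non-zero, which keeps the node from being declared "Alive".
    mkdir -p "$CHECKLOG_DIR"
    echo "$(date): example checker failed" > "$CHECKLOG_DIR/example.log"
fi
```

The driver itself stays trivially small: all the logic lives in the core checker, and the OAR check mechanism only looks at whether a log file exists.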

At any time while the node is running, g5k-checks may be called to either disable or enable checks. This is expected to be used by the OAR prologue and epilogue:

  • /etc/init.d/g5k-checks stop: disable OAR checks
  • /etc/init.d/g5k-checks start: enable OAR checks for oarnodecheckrun
  • /etc/init.d/g5k-checks startrun: enable OAR checks for oarnodecheckrun and run the oarnodecheckrun/oarnodecheckquery pair once immediately, without waiting for the next hourly run.

Basically, the OAR prologue should call "stop", while the epilogue should call "startrun".
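This hand-over between jobs could be sketched as below. The wrapper functions and the G5K_CHECKS variable are illustrative assumptions; real OAR prologue/epilogue scripts are site-specific:

```shell
#!/bin/sh
# Illustrative sketch: how an OAR prologue/epilogue might drive g5k-checks.
G5K_CHECKS="${G5K_CHECKS:-/etc/init.d/g5k-checks}"

oar_prologue() {
    # A job is starting: disable the checks so they do not disturb it.
    "$G5K_CHECKS" stop
}

oar_epilogue() {
    # The job has ended: re-enable the checks and run them once right away,
    # instead of waiting for the next hourly oarnodecheckrun.
    "$G5K_CHECKS" startrun
}
```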

  • At installation time, g5k-checks configures the local syslog daemon: it first looks for a free <n> such that the local<n>.alert and local<n>.warning selectors are not used, then defines them with the action "@syslog". If there is no "syslog" host on the local network, it defaults to writing messages to the local syslog file.
  • The checkers use the local<n> facility to report only important messages. They use a local log file for debugging messages. Please see section 3.2 for further details.
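For instance, assuming local3 is the first free facility found, the setup described above might end up adding selector lines of this form to the syslog configuration (illustrative only; the facility number and the configuration file vary per node):

```
# Hypothetical result of the g5k-checks syslog setup, with <n> = 3
local3.alert     @syslog
local3.warning   @syslog
```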

g5k-parts

g5k-parts is designed to run at both phases of g5k-checks (see above).

  • In Phase 1, g5k-parts validates the partitioning of a Grid'5000 computational node against the G5K Node Storage convention: all partitions but /tmp are primary, and /tmp is a logical partition inside the only extended partition.
  • It first compares /etc/fstab with its backup generated at deployment time. When errors are found at this level, /etc/fstab is reset and the machine reboots.
  • Then, for every partition given on the command line, it first checks that the partition's geometry on the hard drive matches the layout saved at deployment time. It may perform several other checks (e.g. on the formatting of the partition) depending on the partition's role. It attempts to fix errors, which it reports to the syslog system. Sometimes g5k-parts cannot fix the error (e.g. hard drive errors); it can then only prevent the node from declaring itself alive to the OAR server, using a simple stamp file (not executable!) whose existence is tested by the oar-node driver of g5k-parts in Phase 2:

/etc/oar/check.d/g5k-parts-init-failed

  • There is special processing for the NFS shares that the node must mount at boot time. Such mounts sometimes fail through no fault of the node itself, but because of the NFS server(s) or the network connection. To avoid blocking the boot process (and having kadeploy3 fail because of a timeout when it should not), g5k-parts attempts each mount only once. Subsequent attempts are made in Phase 2.
  • In Phase 2, the oar-node driver of g5k-parts calls the script with a single argument, "nfs", in order to limit the checks to the NFS shares.
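Putting the stamp file and the "nfs" argument together, the Phase 2 oar-node driver of g5k-parts could be sketched as follows. The g5k-parts path, the checklog directory, and the helper function are assumptions for illustration, not the actual implementation:

```shell
#!/bin/sh
# Illustrative sketch of the Phase 2 oar-node driver of g5k-parts.
# STAMP matches the stamp file named in the text; the other paths are assumed.
STAMP="${STAMP:-/etc/oar/check.d/g5k-parts-init-failed}"
CHECKLOG_DIR="${CHECKLOG_DIR:-/var/lib/oar/checklogs}"
G5K_PARTS="${G5K_PARTS:-/usr/sbin/g5k-parts}"

report() {
    # Creating any log file here makes oarnodecheckquery return non-zero.
    mkdir -p "$CHECKLOG_DIR"
    echo "$(date): $1" > "$CHECKLOG_DIR/g5k-parts.log"
}

g5k_parts_driver() {
    # Phase 1 could not fix something: its stamp file is still there.
    if [ -e "$STAMP" ]; then
        report "g5k-parts failed during init"
        return 1
    fi
    # Otherwise limit the Phase 2 checks to the NFS shares.
    "$G5K_PARTS" nfs || report "NFS mount check failed"
}
```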