'''''''''''''''''''''''''''''''''''''''''''
Rspec documents supported by Grid'5000's AM
'''''''''''''''''''''''''''''''''''''''''''

When accessing Grid'5000 with the `Grid'5000 AMv3 API`_, you'll exchange
`rspec documents`_ with the API. This page details those generated and
understood by Grid'5000's AM.

==============================================
Understanding status information in GetVersion
==============================================

In addition to the classic answer generated by geni-tools' implementation, the
Grid'5000 aggregate manager publishes a ``status_info`` field with links to
ways of staying informed of maintenance events.

.. code-block:: json

   {
     "urn": "urn:publicid:IDN+am.grid5000.fr+authority+am",
     "geni_api_versions": {
       "3": "https://am.grid5000.fr:443/"
     },
     "geni_credential_types": [
       {
         "geni_version": "3",
         "geni_type": "geni_sfa"
       }
     ],
     "geni_api": 3,
     "geni_am_code": "2.11",
     "geni_ad_rspec_versions": [
       {
         "schema": "http://www.geni.net/resources/rspec/3/ad.xsd",
         "extensions": [],
         "namespace": "http://www.geni.net/resources/rspec/3",
         "type": "GENI",
         "version": "3"
       }
     ],
     "geni_am_type": "gcf",
     "status_info": [
       {
         "media-type": "text/calendar",
         "url": "https://www.grid5000.fr/status/upcoming.ical"
       },
       {
         "media-type": "text/html",
         "url": "https://www.grid5000.fr/status/"
       },
       {
         "media-type": "application/json",
         "url": "https://www.grid5000.fr/status/upcoming.json"
       },
       {
         "media-type": "application/rss+xml",
         "url": "https://www.grid5000.fr/status/upcoming.atom"
       }
     ],
     "geni_request_rspec_versions": [
       {
         "schema": "http://www.geni.net/resources/rspec/3/request.xsd",
         "extensions": [],
         "namespace": "http://www.geni.net/resources/rspec/3",
         "type": "GENI",
         "version": "3"
       }
     ]
   }

=====================================
Understanding the advertisement rspec
=====================================

Further down you can read the advertisement rspec used by the unit tests of
the Aggregate Manager's code. Its most noticeable aspects are briefly
commented here, and a simplified node sketch follows the list:

* There is only one aggregate manager for all Grid'5000 sites. Nodes at
  different sites have different component_managers. In the example, you have
  nodes in Nancy (`component_manager_id` is
  `urn:publicid:IDN+am-devel.grid5000.fr:nancy+authority+am`) and Rennes
  (`component_manager_id` is
  `urn:publicid:IDN+am-devel.grid5000.fr:rennes+authority+am`). All nodes
  managed by the same component manager share the same `location` information,
  in particular latitude and longitude.
* The last part of a node's `component_id` is the path, in Grid'5000's API, to
  the description of the node from which the information about the node was
  generated. Depending on your credentials, the API can be reached at
  https://api.grid5000.fr/ or https://public-api.grid5000.fr/, and will output
  `json` documents.
* In general, you should expect the `disk_image` list to be the same for all
  nodes of a given component_manager (a site in Grid'5000 terminology), as the
  technical team only publishes images that work on all nodes of a site.
  Moreover, differences between sites should only be temporary, as the
  deployment of new images to all sites is not always synchronized.
* With the `raw-pc` sliver_type, the document publishes the list of images
  that can be deployed on nodes and for which root access is given on the
  sliver.
* With the `cpuset` sliver_type, the document advertises the image users can
  get access to without deploying, meaning you don't get root privileges on
  the sliver. This last option is similar to the type of access a user would
  get on an HPC facility through a batch scheduler. It is one of the images in
  the list available with the `raw-pc` sliver_type, so if someone needs root
  access to that image (or to a version used previously), it is accessible.
* The semantics of the `availability` of a node can change depending on the
  options given to `ListResources`. It either means the node has no current
  user (the default), or that the node is currently used in best-effort mode.
  Best-effort mode is used when the experiment or workload running on a node
  can be interrupted. The node is therefore available to other users, but if
  no one has use of the node, it will run low-priority workloads.
* The name given as a node's `hardware_type` is composed of the name of the
  site it is running at and of a cluster name. In the Grid'5000 context, a
  cluster is a set of mostly homogeneous nodes. They are briefly described at
  the end of the advertisement rspec as part of the `hwinfo`_ extension. If an
  experiment's results might depend on the disk model or on the manufacturer
  of the memory modules installed, it is recommended that the experimenter
  look up the detailed description of nodes available from Grid'5000's public
  API, as referenced in the `application/json` media-type description of
  hardware_types.
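To make these elements easier to spot, here is a simplified sketch of how a
single node could appear in the advertisement. It follows the standard GENI v3
advertisement schema rather than being copied from Grid'5000's actual output,
and all URNs, names, paths and coordinates are placeholders (`...`):

.. code-block:: xml

   <rspec type="advertisement" xmlns="http://www.geni.net/resources/rspec/3">
     <node component_manager_id="urn:publicid:IDN+am.grid5000.fr:nancy+authority+am"
           component_id="urn:publicid:IDN+am.grid5000.fr:nancy+node+..."
           component_name="..." exclusive="true">
       <!-- site name plus cluster name -->
       <hardware_type name="..."/>
       <!-- shared by all nodes of the same component manager -->
       <location country="FR" latitude="..." longitude="..."/>
       <available now="true"/>
       <!-- images that can be deployed, with root access on the sliver -->
       <sliver_type name="raw-pc">
         <disk_image name="..."/>
       </sliver_type>
       <!-- the image available without deployment, without root access -->
       <sliver_type name="cpuset">
         <disk_image name="..."/>
       </sliver_type>
     </node>
   </rspec>

The complete advertisement rspec used by the unit tests follows.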
.. literalinclude:: ../../test/fixtures/advertisement.rspec
   :language: xml

===============================
Understanding the request rspec
===============================

Allocation requests are performed through a request rspec. The recognized
request patterns are listed here, and a sketch combining them follows the
list.

* The AM will only parse node elements whose `component_manager_id` uses the
  form `urn:publicid:IDN+am.grid5000.fr:<site>+authority+am`. The site is
  primarily used when the request does not specify a requested node or a
  requested cluster. When the requested node or cluster does not match the
  site specified in the `component_manager_id`, the AM will ignore the site
  indication and honor the node or cluster request on its proper site.
* The AM will ignore contradictory information between the
  `component_manager_id`, the `component_id`, and the `hardware_type`. If a
  `component_id` is provided, the AM will ignore the `component_manager_id`
  and `hardware_type` values. Conversely, if only a `hardware_type` is
  provided, the AM will ignore the `component_manager_id` value.
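The following is a minimal request sketch illustrating both patterns. It
assumes the standard GENI v3 request schema and is not taken from the AM's
test suite; the node `component_id` and the `hardware_type` name are
placeholders to be replaced with actual Grid'5000 values.

.. code-block:: xml

   <rspec type="request" xmlns="http://www.geni.net/resources/rspec/3">
     <!-- A specific node: the component_id takes precedence over the
          component_manager_id and the hardware_type. -->
     <node client_id="node0"
           component_manager_id="urn:publicid:IDN+am.grid5000.fr:nancy+authority+am"
           component_id="urn:publicid:IDN+am.grid5000.fr:nancy+node+...">
       <sliver_type name="raw-pc"/>
     </node>
     <!-- Any node of a given cluster: the hardware_type takes precedence
          over the component_manager_id. -->
     <node client_id="node1"
           component_manager_id="urn:publicid:IDN+am.grid5000.fr:rennes+authority+am">
       <hardware_type name="..."/>
       <sliver_type name="cpuset"/>
     </node>
   </rspec>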
================================
Understanding the manifest rspec
================================

Manifest rspecs are generated by the AM in response to Allocate, Provision and
Describe calls. A sketch of the ssh-related parts of a manifest is given at
the end of this section.

* The generated rspec will not correct inconsistencies between
  `component_manager_id` and `component_id`; it instead reflects the
  information provided in the request.
* `hardware_type` elements are not generated in manifest rspecs.
* A `services` element will provide ssh connection information. The `proxy`
  sub-element provides the ssh proxy address and the node ssh address. Because
  Grid'5000 nodes are behind a firewall, you will need to connect via
  access.grid5000.fr. Users are expected to connect using the private key from
  their authority's user certificate.
* The `component_id` attribute of an `interface` element is used to display
  the name of the interface put in a vlan.
* The `sliver_id` attribute of an `interface` element is the same as the one
  of the `link` element in which the interface is involved.
* Fake nodes used for stitching do not appear in the manifest; similarly,
  their interfaces are removed from the link element.
* If relevant, a stitching element is added to describe which link has been
  stitched to which vlan. The vlantag for the stitched link is also set to the
  stitching's tag instead of Grid'5000's internal vlantag.

.. code-block:: xml

   <stitching xmlns="http://hpn.east.isi.edu/rspec/ext/stitch/0.1/">
     <path>
       <hop>
         <link>
           <switchingCapabilityDescriptor>
             <switchingcapType>l2sc</switchingcapType>
             <encodingType>ethernet</encodingType>
             <switchingCapabilitySpecificInfo>
               <switchingCapabilitySpecificInfo_L2sc>
                 <vlanRangeAvailability>1290</vlanRangeAvailability>
                 <suggestedVLANRange>1290</suggestedVLANRange>
                 <vlanTranslation>false</vlanTranslation>
               </switchingCapabilitySpecificInfo_L2sc>
             </switchingCapabilitySpecificInfo>
           </switchingCapabilityDescriptor>
         </link>
         <nextHop>null</nextHop>
       </hop>
     </path>
   </stitching>

.. _Grid'5000 AMv3 API: index.html
.. _rspec documents: https://fed4fire-testbeds.ilabt.iminds.be/asciidoc/rspec.html
.. _hwinfo: hwinfo.html
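To illustrate the `services` and `interface` points above, here is a minimal
sketch of the ssh- and interface-related parts of a manifest. It assumes the
standard GENI v3 manifest elements; hostnames, URNs and sliver ids are
placeholders, and the Grid'5000-specific `proxy` sub-element described above
is not spelled out here.

.. code-block:: xml

   <rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3">
     <node client_id="node0"
           component_id="urn:publicid:IDN+am.grid5000.fr:nancy+node+..."
           component_manager_id="urn:publicid:IDN+am.grid5000.fr:nancy+authority+am"
           sliver_id="urn:publicid:IDN+am.grid5000.fr+sliver+...">
       <sliver_type name="raw-pc"/>
       <!-- ssh connection information; connections go through the
            access.grid5000.fr proxy and are authenticated with the private
            key from the user certificate -->
       <services>
         <login authentication="ssh-keys" hostname="..." port="22" username="..."/>
       </services>
       <!-- component_id names the interface put in a vlan; its sliver_id
            matches the sliver_id of the link it is involved in -->
       <interface client_id="node0:if0" component_id="..."
                  sliver_id="urn:publicid:IDN+am.grid5000.fr+sliver+..."/>
     </node>
   </rspec>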