Procedure Infrastructure Deployment

From D4Science Wiki

__TOC__

Different deployment procedures apply for gCube, Hadoop and Runtime Resources.

== gCube Resources ==

The gCube nodes of the D4Science Infrastructure can be deployed on 32-bit and 64-bit machines and support several Linux distributions. They have been tested on CERN Scientific Linux, RedHat Enterprise, Ubuntu, and Fedora.

A gCube node of the D4Science Infrastructure is composed of two main constituents:

# A base gHN distribution or SmartGears distribution, managed locally by [[Role Site Manager|Site Managers]];
# gCube services running on the gHN or on the SmartGears container, managed remotely by the VO Admins and the VRE Managers.

'''Installation'''

# gHN - The gHN distribution is available from the [http://www.gcube-system.org/ gCube] website. The [https://gcore.wiki.gcube-system.org/gCube/index.php/Administrator_Guide Administrator Guide] provides detailed information about the gHN installation process.
# SmartGears - The SmartGears distribution is available from the [http://www.gcube-system.org/ gCube] website. The [https://wiki.gcube-system.org/index.php/SmartGears_gHN_Installation SmartGears installation guide] provides information about the SmartGears installation process.
# gCube Service - gCube services are installed when new VOs/VREs are deployed. Check the [[Procedure VO Creation|VO Creation]] and [[Procedure VRE Creation|VRE Creation]] procedures.

'''Upgrade'''

# gHN and SmartGears - The upgrade of gHNs and SmartGears nodes is based on upgrade plans published in the [[Resources Upgrade|Resources Upgrade]] page.
# gCube - The upgrade of gCube services is based on upgrade plans published in the [[Resources Upgrade|Resources Upgrade]] page.

In order to coordinate Installation and Upgrade activities, the [[Role Infrastructure Manager|Infrastructure Managers]] use the [https://support.d4science.org/ Redmine system]. For each activity, the [[Role Infrastructure Manager|Infrastructure Managers]] should open a "D4Science Infrastructure" Redmine ticket describing the activity to perform and assign it to a [[Role Site Manager|Site Manager]] with a '''Due Date'''.

[[Role Site Manager|Site Managers]] responsible for the tasks are expected, when closing the ticket, to fill in the field '''Intervention Time''' with the time spent performing the task.
Tickets associated with installations and upgrades are also reported in the [[Resources Upgrade|Resources Upgrade]] page. More information is available on the [http://wiki.d4science.org/index.php/Procedure_Infrastructure_upgrade Infrastructure upgrade wiki].
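
The sketch below illustrates how such a ticket could be opened and later closed through the [https://www.redmine.org/projects/redmine/wiki/Rest_api Redmine REST API] rather than the web interface. It is a minimal example only: the project identifier, tracker id, status id and the id of the '''Intervention Time''' custom field are assumptions and must be looked up in the actual [https://support.d4science.org/ Redmine] instance.

<syntaxhighlight lang="python">
import requests

REDMINE = "https://support.d4science.org"
API_KEY = "..."  # personal Redmine API key

# Infrastructure Manager: open the activity ticket and assign it with a Due Date.
payload = {
    "issue": {
        "project_id": "d4science-infrastructure",  # assumed project identifier
        "tracker_id": 12,                          # assumed id of the "D4Science Infrastructure" tracker
        "subject": "Upgrade SmartGears on node XYZ",
        "description": "Upgrade plan published in the Resources Upgrade page.",
        "assigned_to_id": 456,                     # Redmine user id of the Site Manager
        "due_date": "2021-10-15",
    }
}
resp = requests.post(f"{REDMINE}/issues.json", json=payload,
                     headers={"X-Redmine-API-Key": API_KEY}, timeout=30)
resp.raise_for_status()
issue_id = resp.json()["issue"]["id"]

# Site Manager: when closing the ticket, fill in the Intervention Time field.
update = {
    "issue": {
        "status_id": 5,                               # assumed 'Closed' status id
        "custom_fields": [{"id": 7, "value": "2h"}],  # assumed Intervention Time field id
    }
}
requests.put(f"{REDMINE}/issues/{issue_id}.json", json=update,
             headers={"X-Redmine-API-Key": API_KEY}, timeout=30).raise_for_status()
</syntaxhighlight>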
  
== Shiny(Proxy) apps ==

* Shiny apps can be deployed in the infrastructure. One [https://www.shinyproxy.io ShinyProxy] cluster is available, running on the [https://docs.docker.com/engine/swarm/ Docker Swarm] cluster.
* Shiny apps are Docker containers that can be built following the ShinyProxy guidelines at [https://www.shinyproxy.io/documentation/deploying-apps/].
  
'''Build and Installation'''

A Shiny app can be deployed in the D4Science Infrastructure in different ways, using the ''ShinyProxy App'' tracker in the [https://support.d4science.org/ Redmine system]:

* It can be a public app already available in [https://hub.docker.com Docker Hub] or any other public container registry. In this case, the image name and the ''run command'' are the only requirements (a minimal local test is sketched after this list). Docker Hub can automatically build and push images from a public repository, as described in its [https://docs.docker.com/docker-hub/builds/ builds] documentation. The general Docker Hub documentation is [https://docs.docker.com/docker-hub/ here].
* '''WORK IN PROGRESS''' A build of a public image can be requested. For that activity, a public repository must be accessible from our [https://jenkins.d4science.org Jenkins] instance so that the process can be automated. The resulting container image will be uploaded to [https://hub.docker.com Docker Hub].
* '''WORK IN PROGRESS''' A build of a private image can be requested. For that activity, a repository must be accessible from our [https://jenkins.d4science.org Jenkins] instance so that the process can be automated. The resulting container image will be uploaded to D4Science's private registry.
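
Before filing the request, the image name and run command can be verified locally. The following sketch uses the [https://docker-py.readthedocs.io/ Docker SDK for Python] to start the app container roughly the way ShinyProxy would; the image name, the run command and the assumption that the app listens on port 3838 (the Shiny default expected by ShinyProxy) are illustrative only.

<syntaxhighlight lang="python">
import docker

REPO, TAG = "myorg/my-shiny-app", "latest"       # hypothetical image on Docker Hub
RUN_CMD = ["R", "-e", "shiny::runApp('/app')"]   # hypothetical run command

client = docker.from_env()
client.images.pull(REPO, tag=TAG)

# Run the container as ShinyProxy would, exposing the Shiny port locally.
container = client.containers.run(
    f"{REPO}:{TAG}", command=RUN_CMD, detach=True, ports={"3838/tcp": 3838}
)
try:
    # The app should now answer on http://localhost:3838
    print(container.logs(tail=20).decode())
finally:
    container.stop()
    container.remove()
</syntaxhighlight>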
  
== Docker containers ==

A [https://docs.docker.com/engine/swarm/ Docker Swarm] cluster is available to deploy and run Docker containers. Only Docker containers are supported at this time.
  
'''Build and Installation'''

A container can be deployed in the D4Science Infrastructure in different ways, using the ''Docker Image'' tracker in the [https://support.d4science.org/ Redmine system]:

* It can be a public container already available in [https://hub.docker.com Docker Hub] or any other public container registry. Docker Hub can automatically build and push images from a public repository, as described in its [https://docs.docker.com/docker-hub/builds/ builds] documentation. The general Docker Hub documentation is [https://docs.docker.com/docker-hub/ here].
* '''WORK IN PROGRESS''' A build of a public image can be requested. For that activity, a public repository must be accessible from our [https://jenkins.d4science.org Jenkins] instance so that the process can be automated. The resulting container image will be uploaded to [https://hub.docker.com Docker Hub] and then deployed into the Swarm cluster.
* '''WORK IN PROGRESS''' A build of a private image can be requested. For that activity, a repository must be accessible from our [https://jenkins.d4science.org Jenkins] instance so that the process can be automated. The resulting container image will be uploaded to D4Science's private registry and then deployed into the Swarm cluster.

The image name, the replica factor and, where appropriate, the external configuration and storage requirements must be specified in the request.
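
For reference, the sketch below shows how such a request maps onto a replicated service on the Swarm cluster, using the [https://docker-py.readthedocs.io/ Docker SDK for Python] against a Swarm manager. The image name, replica count, environment variable and volume are placeholders, not actual D4Science settings; the deployment itself is performed by the infrastructure team.

<syntaxhighlight lang="python">
import docker

# The client must talk to a Swarm manager node.
client = docker.from_env()

service = client.services.create(
    image="myorg/my-service:1.0.0",          # image name from the request (placeholder)
    name="my-service",
    mode=docker.types.ServiceMode("replicated", replicas=2),  # replica factor
    env=["CONFIG_URL=https://example.org/config.json"],       # external configuration (placeholder)
    mounts=["my-service-data:/data:rw"],                      # storage requirement (placeholder)
)
print("Created service", service.id)
</syntaxhighlight>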
 
== Hadoop and Runtime Resources ==

Due to the diverse nature of the services and installation types, Hadoop and Runtime Resources installations and upgrades do not follow a predefined installation or upgrade procedure. However, as for the gCube Resources, each action is associated with a Redmine ticket where [[Role Site Manager|Site Managers]] have to report the '''Intervention Time''' spent.
