Procedure Infrastructure Deployment

__TOC__
Different deployment procedures apply for gCube, Hadoop and Runtime Resources.

== gCube Resources ==
  
The gCube nodes of the D4Science Infrastructure can be deployed on 32- and 64-bit machines and support several Linux distributions. gCube has been tested on [http://linuxsoft.cern.ch CERN Scientific Linux], [http://www.redhat.com/rhel RedHat Enterprise], [http://www.ubuntu.com Ubuntu], and [http://fedoraproject.org Fedora].
  
A gCube node of the D4Science Infrastructure is composed of two main constituents:
 
# A base gHN or SmartGears distribution, managed locally by [[Role Site Manager|Site Managers]];
# gCube services running on the gHN or on the SmartGears container, managed remotely by [[Role VO Admin|VO Admins]] and [[Role VRE Manager|VRE Managers]].
  
  
'''Installation'''

# gHN - The gHN distribution is available from the [http://www.gcube-system.org/ gCube] website. The [https://gcore.wiki.gcube-system.org/gCube/index.php/Administrator_Guide Administrator Guide] provides detailed information about the gHN installation process.
# SmartGears - The SmartGears distribution is available from the [http://www.gcube-system.org/ gCube] website. The [https://wiki.gcube-system.org/index.php/SmartGears_gHN_Installation SmartGears installation guide] provides information about the SmartGears installation process; a generic sketch of the installation flow follows this list.
# gCube Service - gCube services are installed when new VOs/VREs are deployed. Check the [[Procedure VO Creation|VO Creation]] and [[Procedure VRE Creation|VRE Creation]] procedures.
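
Purely as an illustration of the shape of a tarball-based installation (the download URL, version, and target path below are placeholders, not real release artifacts; the authoritative steps are in the guides linked above):

<pre>
# Hypothetical sketch only: fetch and unpack a SmartGears distribution
# (take the real URL, version, and paths from the installation guide)
wget http://www.gcube-system.org/downloads/smartgears-distribution-X.Y.Z.tar.gz
tar -xzf smartgears-distribution-X.Y.Z.tar.gz -C /opt

# The unpacked distribution is then configured and started as described
# in the SmartGears installation guide.
</pre>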
 
 
'''Upgrade'''
 
# gHN and SmartGears - The upgrade of gHNs is based on upgrade plans published in the [[Resources Upgrade|Resources Upgrade]] page.
# gCube - The upgrade of gCube services is based on upgrade plans published in the [[Resources Upgrade|Resources Upgrade]] page.
  
To coordinate installation and upgrade activities, the [[Role Infrastructure Manager|Infrastructure Managers]] use the [https://support.d4science.org/ Redmine system]. For each activity, the [[Role Infrastructure Manager|Infrastructure Manager]] opens a "D4Science Infrastructure" Redmine ticket describing the activity to perform and assigns it to a [[Role Site Manager|Site Manager]] with a '''Due Date'''.
The [[Role Site Manager|Site Manager]] responsible for the task fills in the '''Intervention Time''' field with the time spent performing the task when closing the ticket.
Tickets associated with installations and upgrades are also reported in the [[Resources Upgrade|Resources Upgrade]] page. More information is available on the [http://wiki.d4science.org/index.php/Procedure_Infrastructure_upgrade Infrastructure upgrade wiki] page.
  
== Shiny(Proxy) apps ==
  
* Shiny apps can be deployed in the infrastructure. One [https://www.shinyproxy.io ShinyProxy] cluster is available, running on the [https://docs.docker.com/engine/swarm/ Docker Swarm] cluster.
* Shiny apps are Docker containers that can be built following the ShinyProxy guidelines at [https://www.shinyproxy.io/documentation/deploying-apps/]; a minimal Dockerfile sketch follows this list.
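
As a sketch only, modeled on the packaging pattern in the ShinyProxy deployment guidelines (the base image, R packages, app path, and port below are assumptions to adapt to the actual app):

<pre>
# Minimal Shiny app image (illustrative; adapt base image, packages, and paths)
FROM rocker/r-base

# Install the shiny package that serves the app
RUN R -e "install.packages('shiny', repos='https://cloud.r-project.org')"

# Copy the app sources (app.R, or ui.R/server.R) into the image
COPY app /app

# ShinyProxy starts one container per user session and proxies to this port
EXPOSE 3838
CMD ["R", "-e", "shiny::runApp('/app', host='0.0.0.0', port=3838)"]
</pre>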
  
'''Build and Installation'''
  
A Shiny app can be deployed in the D4Science Infrastructure in different ways, using the ''ShinyProxy App'' tracker in the [https://support.d4science.org/ Redmine system]:
* It can be a public app already available in [https://hub.docker.com Docker Hub] or any other public container registry. In this case, the image name and the ''run command'' are the only requirements. Docker Hub can automatically build and push images from a public repository, as described in its [https://docs.docker.com/docker-hub/builds/ builds] documentation. The general Docker Hub documentation is [https://docs.docker.com/docker-hub/ here]. A build-and-push sketch follows this list.
* '''WORK IN PROGRESS''' A build of a public image can be requested. For that activity, a public repository must be accessible from our [https://jenkins.d4science.org Jenkins] instance so that the process can be automated. The resulting container image will be uploaded to [https://hub.docker.com Docker Hub].
* '''WORK IN PROGRESS''' A build of a private image can be requested. For that activity, a repository must be accessible from our [https://jenkins.d4science.org Jenkins] instance so that the process can be automated. The resulting container image will be uploaded to D4Science's private registry.
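
For the first case, publishing such an image with the standard Docker CLI looks as follows (the account and image names are placeholders):

<pre>
# Build the image from the app repository (Dockerfile in the current directory)
docker build -t myaccount/my-shiny-app:1.0 .

# Authenticate and push to Docker Hub so the image is publicly pullable
docker login
docker push myaccount/my-shiny-app:1.0
</pre>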
  
== Docker containers ==
  
A [https://docs.docker.com/engine/swarm/ Docker Swarm] cluster is available to deploy and run Docker containers. Only Docker containers are supported at this time.
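
As a sketch of what deploying a container on the cluster amounts to (the service and image names here are illustrative, not an actual deployment):

<pre>
# Illustrative only: run a container image as a replicated Swarm service
docker service create --name demo --replicas 2 --publish 8080:80 nginx:stable

# Swarm schedules the tasks on cluster nodes and restarts them on failure
docker service ps demo
</pre>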
 
'''Build and Installation'''
  
A container can be deployed in the D4Science Infrastructure in different ways, using the ''Docker Image'' tracker in the [https://support.d4science.org/ Redmine system]:
* It can be a public container already available in [https://hub.docker.com Docker Hub] or any other public container registry. Docker Hub can automatically build and push images from a public repository, as described in its [https://docs.docker.com/docker-hub/builds/ builds] documentation. The general Docker Hub documentation is [https://docs.docker.com/docker-hub/ here].
* '''WORK IN PROGRESS''' A build of a public image can be requested. For that activity, a public repository must be accessible from our [https://jenkins.d4science.org Jenkins] instance so that the process can be automated. The resulting container image will be uploaded to [https://hub.docker.com Docker Hub] and then deployed into the Swarm cluster.
* '''WORK IN PROGRESS''' A build of a private image can be requested. For that activity, a repository must be accessible from our [https://jenkins.d4science.org Jenkins] instance so that the process can be automated. The resulting container image will be uploaded to D4Science's private registry and then deployed into the Swarm cluster.
  
The image name, the replica factor and, where appropriate, the external configuration and storage requirements must be specified in the request.
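
To make those fields concrete, they map roughly onto the options of a Swarm service definition, as in the hypothetical sketch below (all names and values are placeholders):

<pre>
# Hypothetical mapping of a 'Docker Image' request onto a Swarm service:
#   image name             -> myaccount/myservice:1.2
#   replica factor         -> --replicas 3
#   external configuration -> --env (or Docker configs/secrets)
#   storage requirements   -> --mount
docker service create \
  --name myservice \
  --replicas 3 \
  --env APP_CONFIG=/etc/myservice/config.yml \
  --mount type=volume,source=myservice-data,target=/data \
  myaccount/myservice:1.2
</pre>
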
== Hadoop and Runtime Resources ==
  
Due to the diverse nature of the services and installation types involved, Hadoop and Runtime Resources do not follow a predefined installation or upgrade procedure. However, as for gCube Resources, each action is associated with a Redmine ticket where [[Role Site Manager|Site Managers]] have to report the '''Intervention Time''' spent.
