Procedure Infrastructure Deployment

Different deployment procedures apply for gCube resources, Shiny(Proxy) apps, Docker containers, and Hadoop and Runtime Resources.
gCube Resources
The gCube nodes of the D4Science Infrastructure (an e-Infrastructure operated by the D4Science.org initiative) can be deployed on 32-bit and 64-bit machines and support several Linux distributions. They have been tested on CERN Scientific Linux, RedHat Enterprise, Ubuntu, and Fedora.
A gCube node of the D4Science Infrastructure is composed of two main constituents:
- A base gHN or SmartGears distribution, managed locally by Site Managers;
- gCube services running on the gHN or on the SmartGears container, managed remotely by VO Admins and VRE Managers.
Installation
- gHN - The gHN distribution is available from the gCube website. The Administrator Guide provides detailed information about the gHN installation process.
- SmartGears - The SmartGears distribution is available from the gCube website. The SmartGears installation guide provides information about the SmartGears installation process.
- gCube Service - gCube services are installed when new VOs/VREs are deployed. Check the VO Creation and VRE Creation procedures.
Upgrade
- gHN and SmartGears - The upgrade of gHN and SmartGears distributions is based on upgrade plans published in the Resources Upgrade page.
- gCube - The upgrade of gCube services is based on upgrade plans published in the Resources Upgrade page.
In order to coordinate installation and upgrade activities, the Infrastructure Managers use the Redmine system. For each activity, the Infrastructure Managers should open a "D4Science Infrastructure" Redmine ticket describing the activity to perform and assign it to a Site Manager with a Due Date.
When closing the ticket, the Site Managers responsible for the task are expected to fill in the Intervention Time field with the time spent performing the task.
Tickets associated with installations and upgrades are also reported in the Resources Upgrade page. More information is available on the Infrastructure upgrade wiki page.
Shiny(Proxy) apps
- Shiny apps can be deployed in the infrastructure. One ShinyProxy cluster is available, running on the Docker Swarm cluster.
- Shiny apps are Docker containers that can be built following the ShinyProxy guidelines at https://www.shinyproxy.io/documentation/deploying-apps/; a minimal Dockerfile sketch is shown after this list.
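A minimal Dockerfile sketch for a Shiny app, modelled on the ShinyProxy deployment guidelines; the openanalytics/r-base base image, the ./app source directory, and the installed packages are illustrative assumptions, not D4Science requirements:

 # Hypothetical Dockerfile for a Shiny app packaged for ShinyProxy
 FROM openanalytics/r-base
 # Install Shiny plus any packages the app needs
 RUN R -e "install.packages('shiny', repos='https://cloud.r-project.org')"
 # Copy the app sources (assumed to live in ./app) into the image
 COPY app /root/app
 # ShinyProxy contacts the app on port 3838 by default
 EXPOSE 3838
 CMD ["R", "-e", "shiny::runApp('/root/app', host='0.0.0.0', port=3838)"]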
Build and Installation
A Shiny app can be deployed in the D4Science Infrastructure in different ways, using the ShinyProxy App tracker in the Redmine system:
- It can be a public app already available in Docker Hub or any other public container registry. In this case, the image name and the run command are the only requirements (see the build-and-push sketch after this list). Docker Hub can automatically build and push images from a public repository, as described in its builds documentation. The general Docker Hub documentation is here.
- WORK IN PROGRESS A build of a public image can be requested. For that activity, a public repository must be accessible from our Jenkins instance so that the process can be automated. The resulting container image will be uploaded into Docker Hub.
- WORK IN PROGRESS A build of a private image can be requested. For that activity, a repository must be accessible from our Jenkins instance so that the process can be automated. The resulting container image will be uploaded into the D4Science private registry.
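A sketch of how a public Shiny app image could be built, tested locally, and pushed to Docker Hub before the ticket is opened; the image name myorg/my-shiny-app and the tag are hypothetical placeholders:

 # Build the image from the Dockerfile above (hypothetical image name and tag)
 docker build -t myorg/my-shiny-app:1.0.0 .
 # Test it locally; this is also the kind of run command to report in the ticket
 docker run --rm -p 3838:3838 myorg/my-shiny-app:1.0.0
 # Push to Docker Hub (requires a prior 'docker login' with a Docker Hub account)
 docker push myorg/my-shiny-app:1.0.0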
Docker containers
A Docker Swarm cluster is available to deploy and run Docker containers. Only Docker containers are supported at this time.
Build and Installation
A container can be deployed in the D4Science Infrastructure in different ways, using the Docker Image tracker in the Redmine system:
- It can be a public container already available in Docker Hub or any other public container registry. Docker Hub can automatically build and push images from a public repository, as described in its builds documentation. The general Docker Hub documentation is here.
- WORK IN PROGRESS A build of a public image can be requested. For that activity, a public repository must be accessible from our Jenkins instance so that the process can be automated. The resulting container image will be uploaded into Docker Hub and then deployed into the Swarm cluster.
- WORK IN PROGRESS A build of a private image can be requested. For that activity, a repository must be accessible from our Jenkins instance so that the process can be automated. The resulting container image will be uploaded into the D4Science private registry and then deployed into the Swarm cluster.
The image name, the replica factor, and, where appropriate, the external configuration and storage requirements must be specified in the request, as illustrated in the sketch below.
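A sketch of how the requested parameters map onto a service on the Swarm cluster; the actual deployment is performed by the infrastructure operators, and the service name, image, environment variable, and volume shown here are hypothetical:

 # Hypothetical Swarm service: image name, replica factor,
 # external configuration (env) and storage (volume) as given in the request
 docker service create \
   --name my-service \
   --replicas 3 \
   --env MY_SETTING=value \
   --mount type=volume,source=my-data,target=/data \
   myorg/my-image:1.0.0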
Hadoop and Runtime Resources
Due to the diverse nature of the services and installation types, Hadoop and Runtime Resources installations and upgrades do not follow a predefined installation or upgrade procedure. However, as for the gCube Resources, each action is associated with a Redmine ticket where Site Managers have to report the Intervention Time spent.