Commit 6528055a authored by Loraine Gueguen

Update README.md
# gga_load_data tools

The gga_load_data tools allow automated deployment of GMOD visualisation tools (Chado, Tripal, JBrowse, Galaxy) for a set of genomes and datasets.
They are based on the Galaxy Genome Annotation (GGA) project (https://galaxy-genome-annotation.github.io).
A stack of Docker services will be deployed for each organism.
## Description
A stack of Docker services is deployed for each organism, from an input yaml file describing the data.
See `examples/example.yml` for an example of what information can be described and the correct formatting of this input file.
Each GGA environment is deployed at [https://hostname/sp/genus_species/](https://hostname/sp/genus_species/).
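As a hypothetical illustration (the hostname and organism names below are placeholders, not values used by the scripts), the per-organism URL is built from the genus and species names:

```shell
# Illustrative only: hostname, genus and species are placeholders.
# Each organism's GGA environment lives under /sp/<genus>_<species>/.
hostname="example.org"
genus="homo"
species="sapiens"
url="https://${hostname}/sp/${genus}_${species}/"
echo "$url"   # prints https://example.org/sp/homo_sapiens/
```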
## Reverse proxy and authentication

### Traefik
Traefik is a reverse proxy that routes HTTP traffic to the various Docker Swarm services.
The Traefik dashboard is deployed at [https://hostname/traefik/](https://hostname/traefik/).
### Authentication with Authelia
The authentication layer is optional. If used, the config file needs the variables `https_port`, `auth_hostname` and `authelia_config_path`.
Authelia is an authentication agent, which can be plugged to an LDAP server, and which Traefik can use to check permissions to access services.
Traefik queries Authelia automatically to check permissions every time someone wants to access a page.
If users are not logged in, they are redirected to the Authelia portal.
Note that Authelia needs a secured connection (no self-signed certificate) between the upstream proxy and Traefik (and https between the internet and the proxy).
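As a sketch (the file name and values below are invented; only the three variable names come from this README), one can check that a config file defines the required authentication variables:

```shell
# Hypothetical config file: values are invented, key names are from this README.
cat > config.yml <<'EOF'
https_port: 443
auth_hostname: auth.example.org
authelia_config_path: /etc/authelia/configuration.yml
EOF
# Verify that each required authentication variable is present.
for key in https_port auth_hostname authelia_config_path; do
  grep -q "^${key}:" config.yml && echo "${key} OK"
done
```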
## Steps

The "gga_load_data" tools are composed of 4 scripts:
- gga_init: Create the directory tree for the input organisms and deploy their stacks, as well as the Traefik and (optionally) Authelia stacks
- gga_get_data: Create the `src_data` directory tree for the input organisms and copy their datasets into it
## Usage:
All scripts require one input file that describes the species and their associated data
(see `examples/example.yml`). Every dataset path in this file must be an absolute path.

Another yaml file is required, the config file, which contains configuration variables (Galaxy and Tripal passwords, etc.) that
the scripts need to create the different services and to access the Galaxy container. By default, the config file
at the repository root is used if none is specified on the command line. An example of this config file is available
in the `examples` folder.
- Deploy stacks part:
```bash
$ python3 /path/to/repo/gga_init.py input_file.yml -c/--config config_file [-v/--verbose] [OPTIONS]
--main-directory $PATH (Path where to create/update stacks; default=current directory)
--force-traefik (If specified, will overwrite traefik and authelia files; default=False)
```
- Copy source data file:
```bash
$ python3 /path/to/repo/gga_get_data.py input_file.yml [-v/--verbose] [OPTIONS]
--main-directory $PATH (Path where to access stacks; default=current directory)
```
- Load data in Galaxy library and prepare Galaxy instance:
```bash
$ python3 /path/to/repo/gga_load_data.py input_file.yml -c/--config config_file [-v/--verbose]
--main-directory $PATH (Path where to access stacks; default=current directory)
```
- Run a workflow in galaxy:
```bash
$ python3 /path/to/repo/gga_load_data.py input_file.yml -c/--config config_file --workflow /path/to/workflow.ga [-v/--verbose] [OPTIONS]
--workflow $WORKFLOW (Path to the workflow to run in Galaxy. A few preset workflows are available in the "workflows" folder of the repository)
--main-directory $PATH (Path where to access stacks; default=current directory)
```
## Directory tree:
For every input organism, a dedicated directory is created with `gga_get_data.py`. The script creates this directory and all the required subdirectories.
If the user is adding new data to a species (for example, adding another strain dataset to the same species), the directory tree is updated.
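As a minimal sketch (the subdirectory names below are illustrative, not the exact layout produced by the scripts), updating the tree for a new strain only adds the missing branches:

```shell
# Illustrative layout only; the real tree is produced by gga_get_data.py.
mkdir -p genus_species/src_data/genome/strain1
mkdir -p genus_species/src_data/annotation/strain1
# Later, adding another strain dataset only creates the missing directories:
mkdir -p genus_species/src_data/genome/strain2
find genus_species -type d | sort
```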
## Current limitations
The stacks deployment and the data loading into Galaxy should be run separately, and only once the Galaxy service is ready.
The `gga_load_data.py` script checks that the Galaxy service is ready before loading the data and exits with a notification if it is not.
The status of the Galaxy service can be checked manually with `$ docker service logs -f genus_species_galaxy` or
`./serexec genus_species_galaxy supervisorctl status`.
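A possible way to wait for the service is a small polling helper (a sketch, not part of the repository; substitute a real readiness command, e.g. the `serexec` call above, for `true`):

```shell
# Sketch of a readiness poll; not part of gga_load_data. A real check could
# be e.g.: ./serexec genus_species_galaxy supervisorctl status
wait_ready() {
  tries=$1; shift
  i=1
  while [ "$i" -le "$tries" ]; do
    if "$@"; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    sleep 5
    i=$((i + 1))
  done
  echo "not ready after $tries attempts" >&2
  return 1
}
wait_ready 3 true   # prints "ready after 1 attempt(s)"
```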
When deploying the stack of services, the Galaxy service can take a long time to be ready, because the Galaxy container prepares a persistent location for the container data.
In development mode only, this can be disabled by setting the variable `persist_galaxy_data` to `False` in the config file.
## Requirements