Automated integration of new organisms into GGA environments, deployed as a stack of Docker services.
The gga_load_data tools enable automated deployment of GMOD visualisation tools (Chado, Tripal, JBrowse, Galaxy) for a set of genomes and datasets.
They are based on the Galaxy Genome Annotation (GGA) project (https://galaxy-genome-annotation.github.io).
A stack of Docker services will be deployed for each organism.
## Description
Automatically generates functional GGA environments from a descriptive input yaml file.
See `examples/example.yml` for an example of what information can be described
and the correct formatting of this input file.
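To give a feel for the expected format, a purely illustrative fragment follows; every field name below is hypothetical, so rely on `examples/example.yml` for the real schema:

```yaml
# Illustrative only -- field names are hypothetical, see examples/example.yml
genus1_species1:
  description:
    genus: genus1
    species: species1
  data:
    genome_path: /absolute/path/to/genome.fa
```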
The `gga_load_data` tool is divided into 4 separate scripts:
- gga_init: Create the directory tree for organisms and deploy stacks for the input organisms, as well as the Traefik and (optionally) Authelia stacks
- gga_get_data: Create `src_data` directory tree for organisms and copy datasets for the input organisms into the organisms directory tree
- gga_load_data: Load the datasets of the input organisms into their Galaxy library
- run_workflow_phaeoexplorer: Remotely run a custom workflow in Galaxy, proposed as an "example script" to take inspiration from as workflow parameters are specific to Phaeoexplorer data
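A typical end-to-end run chains the scripts in the order listed above; the invocation below is a sketch, and the exact flags (`--config`, `--workflow`) should be checked against each script's `--help`:

```shell
# Hypothetical invocation order; flag names are assumptions, check --help.
python3 gga_init.py input_file.yml --config config_file
python3 gga_get_data.py input_file.yml
python3 gga_load_data.py input_file.yml --config config_file
python3 run_workflow_phaeoexplorer.py input_file.yml --config config_file \
        --workflow workflows/my_workflow.ga
```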
## Directory tree:
For every input organism, a dedicated directory is created. The script will create this directory and all subdirectories required.
If the user is adding new data to a species (for example adding another strain's datasets to the same species), the directory tree will be updated.
Directory tree structure:
```
| |
| |---/docker-compose.yml
| |
| |---/metadata_genus1_species1.yml (WIP)
|
|---/metadata.yml
|
|---/traefik
|   |---/docker-compose.yml
|   |---/authelia
```
## Usage:
The scripts all take one mandatory input file that describes the species and their associated data
(see `examples/example.yml`). Every dataset path in this file must be an absolute path.
You must also fill in a config file containing sensitive variables (Galaxy and Tripal passwords, etc..) that
the script will read to create the different services and to access the Galaxy container. By default, the config file
inside the repository root will be used if none is specified on the command line. An example of this config file is available
in the `examples` folder.
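Since every dataset path in the input file must be absolute, a small pre-flight check can catch relative paths before any stack is deployed. The helper below is a sketch, not part of the scripts; the function name and the flat list of paths are assumptions:

```python
import os

def find_relative_paths(paths):
    """Return the dataset paths that are not absolute (and thus invalid here)."""
    return [p for p in paths if not os.path.isabs(p)]

# Example: the second path would violate the absolute-path requirement.
bad = find_relative_paths(["/data/genus1_species1/genome.fa", "annotation/genes.gff"])
print(bad)  # ['annotation/genes.gff']
```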
**Warning: the config file is not required as an option for the `gga_init` and `gga_get_data` scripts**
```
--workflow $WORKFLOW (Path to the workflow to run in Galaxy. A couple of preset workflows are available in the "workflows" folder of the repository)
--main-directory $PATH (Path where to access stacks; default=current directory)
```
**Warning: the input file and config file have to be the same for all scripts!**
## Current limitations
When deploying the stack of services, the Galaxy service can take a long time to be ready. This is due to the Galaxy container preparing a persistent location for the container data. This can be bypassed by setting the variable `persist_galaxy_data` to `False` in the config file.
The stacks deployment and the data loading into Galaxy should hence be run separately and only once the Galaxy service is ready.
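In the config file, the relevant setting would look like the excerpt below; only the `persist_galaxy_data` key comes from this README, everything else in the real file is omitted:

```yaml
# Excerpt of the script config file (other keys omitted).
persist_galaxy_data: "False"  # skip preparing persistent Galaxy data for faster startup
```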
The `gga_load_data.py` script will check that the Galaxy service is ready before loading the data and will exit with a notification if it is not.
You can check the status of the Galaxy service with `$ docker service logs -f genus_species_galaxy`, or run
`./serexec genus_species_galaxy supervisorctl status` to verify directly from the container.
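The readiness check performed by `gga_load_data.py` can be approximated by polling the Galaxy API; the sketch below is not the script's actual code, and it assumes the service answers on `/api/version` with HTTP 200 once ready:

```python
import time
import urllib.request
import urllib.error

def wait_for_galaxy(base_url, timeout=600, interval=10, fetch=None):
    """Poll Galaxy's /api/version endpoint until it responds, or give up.

    `fetch` is injectable for testing; by default it returns the HTTP status.
    """
    if fetch is None:
        fetch = lambda url: urllib.request.urlopen(url, timeout=5).status
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if fetch(base_url.rstrip("/") + "/api/version") == 200:
                return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after a pause
        time.sleep(interval)
    return False
```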