new doc and methods on models for user input
Merge remote-tracking branch 'origin/argo_workflow'
@ -4,7 +4,8 @@ WORKDIR /app
COPY . .

RUN go get github.com/beego/bee/v2 && go install github.com/beego/bee/v2@master
RUN go get github.com/beego/bee/v2 && \
    go install github.com/beego/bee/v2@master

# Generating routers/commentsRouter.go
RUN bee generate routers
@ -12,8 +13,8 @@ RUN bee generate routers
# Generating the swagger
RUN timeout 20 bee run -gendoc=true -downdoc=true -runmode=dev || :

RUN sed -i 's/http:\/\/127.0.0.1:8080\/swagger\/swagger.json/swagger.json/g' swagger/index.html
RUN sed -i 's/https:\/\/petstore.swagger.io\/v2\/swagger.json/swagger.json/g' swagger/index.html
RUN sed -i 's/http:\/\/127.0.0.1:8080\/swagger\/swagger.json/swagger.json/g' swagger/index.html && \
    sed -i 's/https:\/\/petstore.swagger.io\/v2\/swagger.json/swagger.json/g' swagger/index.html

RUN ls -l routers
@ -59,4 +59,8 @@ From the root of the project run:
`./scripts/multinode.sh ./scripts/demo.json`

This script should be updated so that it can be run from anywhere.

# More documentation

[Visit the docs/ directory](/docs/)
BIN
docs/UML/diag_class_workflow.jpg
Normal file
After Width: | Height: | Size: 60 KiB |
70
docs/components/components_specification.md
Normal file
@ -0,0 +1,70 @@
This document aims to describe the role of each component in the catalog. It textually describes their attributes so that anyone involved in the development can grasp each component's role and also identify missing features/attributes.

This document should be accompanied by a diagram that summarizes it.

# Components description

As a user of oc-catalog I want to be able to create a workflow, which represents the flow of data between different components: computing, datacenter, data and storage.

Each component has a name, a logo, a short and a long description.
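The models later in this changeset (ComputingNEWModel, DataNEWModel, DatacenterNEWModel, StorageNEWModel) each declare these four descriptive fields individually; purely as a sketch, a hypothetical shared base gathering them could look like this in Go:

```go
package models

// Hypothetical: a common base for the four descriptive fields that every
// component model in this changeset currently repeats. Not part of the
// actual oc-catalog code.
type ComponentBase struct {
	Name             string `json:"name" required:"true"`
	ShortDescription string `json:"short_description" required:"true"`
	Description      string `json:"description" required:"true"`
	Logo             string `json:"logo" required:"true"`
}
```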
## Computing

A computing component is used to execute the docker image in its **command** attribute. A computing component **must** be linked with a datacenter component, where it will be executed.

It has two required fields, **CPU** and **RAM**, which describe the minimum amount of computing resources needed to execute it.

Optionally, it can have a value in the **GPU** field.

For each instance of a computing component we can specify:
- another entrypoint for the image: this must be specified after the name of the image in **command**.
- **arguments**, which will be passed to the entrypoint
- **environment variables**

The fields **input** and **output** list the different links coming in and out of the computing component.
> [!] This is redundant with the Links object that we create when parsing the XML in oc-scheduler; it might be better to remove them if they prove redundant.
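As an illustration, here is a Go sketch of one computing instance, using values from the CURL entry added to the demo script in this changeset. The structs are trimmed to the fields discussed above (they are not the full ComputingNEWModel), and the command/arguments/environment values are hypothetical:

```go
package main

import "fmt"

// Trimmed sketch of the computing model: only the fields discussed above.
type ExecutionRequirements struct {
	CPUs uint // minimum CPUs
	RAM  uint // minimum RAM (MB)
	GPUs uint // optional
}

type Computing struct {
	Name                  string
	Image                 string
	Command               string   // may carry an alternative entrypoint after the image name
	Arguments             []string // passed to the entrypoint
	Environment           []string
	ExecutionRequirements ExecutionRequirements
}

func main() {
	// Name, image and requirements come from the CURL component in scripts/demo.json.
	curl := Computing{
		Name:                  "CURL",
		Image:                 "curlimages/curl:7.88.1",
		Command:               "curl",                          // hypothetical entrypoint override
		Arguments:             []string{"https://example.org"}, // hypothetical
		Environment:           []string{"HTTP_PROXY="},         // hypothetical
		ExecutionRequirements: ExecutionRequirements{CPUs: 1, RAM: 1024, GPUs: 1},
	}
	fmt.Printf("%+v\n", curl)
}
```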
## Datacenter

A datacenter is identified by its **DC acronym**, which is a very short form of its name.

**Note**: as of now, this DC acronym field is used as a primary key in order to link other components to a datacenter. This might be a sign that using a NoSQL db in the future might not be the best option.

Each datacenter must declare:
- its **Memory**, composed of two fields: **ecc** (error-correcting code) and **size** (in MB)
- its **CPU**, which is composed of:
    - its number of **cores**
    - a boolean declaring whether the cores are **shared** or not
    - its **architecture**
    - its **platform**
    - the **minimum memory** needed

Finally, we can add **GPU**s to a datacenter; they are characterized by:
- their number of **CUDA cores**
- their number of **tensor cores**
- their **size** (MB)
- their **model**
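A hypothetical Go sketch mirroring the two lists above (the real DatacenterNEWModel shown later in this changeset only exposes name, acronym and descriptions, so all field names below are assumptions drawn from the prose):

```go
package models

// Hypothetical sub-structures mirroring the datacenter attributes listed
// above; these are not the actual oc-catalog models.
type DatacenterMemory struct {
	ECC  bool // error-correcting code
	Size uint // in MB
}

type DatacenterCPU struct {
	Cores         uint
	Shared        bool // whether the cores are shared
	Architecture  string
	Platform      string
	MinimumMemory uint
}

type DatacenterGPU struct {
	CudaCores   uint
	TensorCores uint
	Size        uint // MB
	Model       string
}
```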
## Data

This component represents a data source; we want to know what **type** of data it produces. It has a base64 encoded **example** of the final data structure.

The source **URL** must be specified, as well as the **protocol**.

> ! Hence, maybe these two fields should be merged, so that we only have a URL that indicates its protocol.
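For reference, the DataNEWModel further down in this changeset carries exactly these attributes; a trimmed Go sketch of one instance (all values illustrative):

```go
package main

import "fmt"

// Trimmed sketch of DataNEWModel, keeping only the fields discussed above.
type Data struct {
	Name     string
	Type     string   // e.g. "file"
	Example  string   // base64 encoded sample of the final data structure
	Location string   // source location
	Protocol []string // the model marks this TODO: enum type
}

func main() {
	d := Data{
		Name:     "IRT risk database",        // illustrative
		Type:     "file",
		Example:  "aGVsbG8=",                 // base64 for "hello"
		Location: "https://example.org/data", // hypothetical
		Protocol: []string{"https"},
	}
	fmt.Printf("%+v\n", d)
}
```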
## Storage

Storage components are linked to a datacenter, and used to store the result of a computing component.

A storage component is associated with a datacenter through its **DC acronym**. It also has a **URL** to reach it. A storage component has a storage **size** and some optional fields:
- **encrypted** storage
- the type of **redundancy**
- its **throughput**

Finally, they have a **price**.
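A Go sketch of one storage entry, with field names taken from StorageNEWModel and scripts/demo.json in this changeset (values illustrative, struct trimmed):

```go
package main

import "fmt"

// Trimmed sketch of the storage model fields discussed above.
type Storage struct {
	DCacronym    string // links the storage to a datacenter
	URL          string
	Size         uint
	Encryption   bool
	Redundancy   string
	Throughput   string
	BookingPrice uint
}

func main() {
	s := Storage{
		DCacronym:    "DC_myDC",
		URL:          "", // demo.json currently leaves this empty
		Size:         40000,
		Encryption:   false,
		Redundancy:   "RAID5S",
		Throughput:   "r:300,w:350",
		BookingPrice: 90,
	}
	fmt.Printf("%+v\n", s)
}
```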
# Diagram

![](models_oc-catalog.jpg)
BIN
docs/components/models_oc-catalog.jpg
Normal file
After Width: | Height: | Size: 59 KiB |
12
docs/identified_problems.md
Normal file
@ -0,0 +1,12 @@
# Code

- [ ] In most of the components from 'models/' we have a method to add input and output to the model; however, this linking of components is already done in oc-schedule when parsing the MxGraph. We need to determine whether adding relations between components inside the objects themselves is necessary.
    - When running in debug mode with a breakpoint on the first line of computing.addLink, it is only called once
- [ ]

## MxGraph

- [ ] The ConsumeMxGraphModel method is way too long; it should be refactored and broken down into different sub-methods
    - mxcells are put inside an `<object>` tag once the settings have been opened, whether values have been set or not. Maybe we could find a way to make mxgraph add these whenever we add a component to the graph.
    - then identify the links only
- [ ] It is unclear what the inputs and the outputs are. It seems like they were implemented to link two components, but it seems redundant with the identification of links
6
docs/lexicon.md
Normal file
@ -0,0 +1,6 @@
- rType: resource type, can only be:
    - rtype.DATA
    - rtype.COMPUTING
    - rtype.STORAGE
    - rtype.DATACENTER
    - rtype.INVALID if it doesn't match any of the previous types
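For context, these constants are what ConsumeMxGraphModel switches on when classifying resources; a minimal sketch of that pattern (the describeRtype helper is hypothetical, and the parameter is left as interface{} since the concrete type returned by getRtype() is not shown in this changeset):

```go
package models

import "cloud.o-forge.io/core/oc-catalog/models/rtype"

// describeRtype is a hypothetical helper mirroring the switches in
// ConsumeMxGraphModel: it maps an rType value to a human-readable label.
func describeRtype(t interface{}) string {
	switch t {
	case rtype.DATA:
		return "data"
	case rtype.COMPUTING:
		return "computing"
	case rtype.STORAGE:
		return "storage"
	case rtype.DATACENTER:
		return "datacenter"
	default:
		return "invalid" // rtype.INVALID
	}
}
```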
@ -1,6 +1,9 @@
package models

import (
	"fmt"
	"strings"

	"cloud.o-forge.io/core/oc-catalog/models/rtype"
	"cloud.o-forge.io/core/oc-catalog/services"
	"github.com/beego/beego/v2/core/logs"
@ -27,30 +30,30 @@ type RepositoryModel struct {
}

type ComputingNEWModel struct {
	Description string `json:"description,omitempty" required:"true"`
	Name string `json:"name,omitempty" required:"true" validate:"required" description:"Name of the computing"`
	Description string `json:"description,omitempty" required:"true"`
	ShortDescription string `json:"short_description,omitempty" required:"true" validate:"required"`
	Logo string `json:"logo,omitempty" required:"true" validate:"required"`

	Type string `json:"type,omitempty" required:"true"`
	// Type string `json:"type,omitempty" required:"true"`
	Owner string `json:"owner,omitempty"`
	License string `json:"license,omitempty"`
	Price uint `json:"price,omitempty"`

	ExecutionRequirements ExecutionRequirementsModel `json:"execution_requirements,omitempty"`

	Dinputs []string `json:"dinputs,omitempty"`
	Doutputs []string `json:"doutputs,omitempty"`
	// Dinputs []string `json:"dinputs,omitempty"` // Possibly redundant with Links object in oc-schedule
	// Doutputs []string `json:"doutputs,omitempty"` // Possibly redundant with Links objects in oc-schedule

	Image string `json:"image,omitempty"`
	Command string `json:"command,omitempty"`
	Arguments []string `json:"arguments,omitempty"`
	Environment []string `json:"environment,omitempty"`
	Ports []string `json:"ports,omitempty"`
	// Ports []string `json:"ports,omitempty"`

	CustomDeployment string `json:"custom_deployment,omitempty"`
	// CustomDeployment string `json:"custom_deployment,omitempty"`

	Repository RepositoryModel `json:"repository,omitempty"`
	// Repository RepositoryModel `json:"repository,omitempty"`
}

type ComputingModel struct {
@ -170,3 +173,20 @@ func GetMultipleComputing(IDs []string) (object *[]ComputingModel, err error) {
func PostOneComputing(obj ComputingNEWModel) (ID string, err error) {
	return postOneResource(obj, rtype.COMPUTING)
}

// Pointer receiver so that the user's input actually persists on the model.
func (obj *ComputingModel) AddUserInput(inputs map[string]interface{}) {
	// So far there are only a few inputs to handle, so a switch with a case for each
	// type of attribute is enough, to prevent too much complexity
	for key, value := range inputs {
		switch strings.ToLower(key) {
		case "command":
			obj.Command = value.(string)
		case "arguments":
			obj.Arguments = value.([]string)
		case "env":
			obj.Environment = value.([]string)
		default:
			logs.Alert(fmt.Sprintf("%s is not an attribute of computing components", key))
		}
	}
}
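A sketch of how a caller, e.g. a controller applying user-supplied overrides, might use this method; applyOverrides and the values are hypothetical, and the function is assumed to live alongside the model in the models package:

```go
// Hypothetical caller: apply user-supplied overrides to a computing model.
// Unknown keys are reported via logs.Alert by AddUserInput itself.
func applyOverrides(comp *ComputingModel) {
	comp.AddUserInput(map[string]interface{}{
		"Command":   "curl", // keys are matched case-insensitively
		"arguments": []string{"-s", "https://example.org"},
		"env":       []string{"HTTP_PROXY="},
	})
}
```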
@ -11,15 +11,15 @@ import (
// TODO: review why swagger is not using the metadata when we do hierarchy
type DataNEWModel struct {
	Name string `json:"name,omitempty" required:"true" validate:"required" description:"Name of the data"`
	Description string `json:"description" required:"true" validate:"required"`
	ShortDescription string `json:"short_description" required:"true" validate:"required"`
	Logo string `json:"logo" required:"true" validate:"required"`
	Description string `json:"description" required:"true" validate:"required"`

	// Dtype string `json:"dtype"`
	Type string `json:"type,omitempty" required:"true" validate:"required" description:"Define type of data" example:"file"`
	Example string `json:"example" required:"true" validate:"required" description:"base64 encoded data"`
	Location string `json:"location" required:"true" validate:"required"`
	Dtype string `json:"dtype"`
	Protocol []string `json:"protocol"` //TODO Enum type
	Location string `json:"location" required:"true" validate:"required"`
}

type DataModel struct {
@ -32,9 +32,9 @@ type DatacenterGpuModel struct {
type DatacenterNEWModel struct {
	Name string `json:"name" required:"true"`
	Type string `json:"type,omitempty" required:"true"`
	// Type string `json:"type,omitempty" required:"true"`
	Acronym string `json:"acronym" required:"true" description:"id of the DC"`
	Hosts []string `json:"hosts" required:"true" description:"list of host:port"`
	// Hosts []string `json:"hosts" required:"true" description:"list of host:port"`
	Description string `json:"description" required:"true"`
	ShortDescription string `json:"short_description" required:"true" validate:"required"`
	Logo string `json:"logo" required:"true" validate:"required"`
@ -1,6 +1,9 @@
package models

import (
	"fmt"
	"strings"

	"cloud.o-forge.io/core/oc-catalog/models/rtype"
	"cloud.o-forge.io/core/oc-catalog/services"
	"github.com/beego/beego/v2/core/logs"
@ -15,6 +18,7 @@ type StorageNEWModel struct {
	Type string `json:"type,omitempty" required:"true"`

	DCacronym string `json:"DCacronym" required:"true" description:"Unique ID of the DC where the storage is located"`
	URL string `json:"URL"`

	Size uint `json:"size" required:"true"`
	Encryption bool `json:"encryption"`
@ -136,3 +140,16 @@ func GetMultipleStorage(IDs []string) (object *[]StorageModel, err error) {
	return object, err
}

// Pointer receiver so that the user's input actually persists on the model.
func (obj *StorageModel) AddUserInput(inputs map[string]interface{}) {
	// So far there are only a few inputs to handle, so a switch with a case for each
	// type of attribute is enough, to prevent too much complexity
	for key, value := range inputs {
		switch strings.ToLower(key) {
		case "url": // keys are lowercased above, so the case label must be lowercase too
			obj.URL = value.(string)
		default:
			logs.Alert(fmt.Sprintf("%s is not an attribute of storage components", key))
		}
	}
}
@ -564,9 +564,9 @@ func FindSliceInSlice(slice1 []string, slice2 []string) (int, int, bool) {
	return -1, -1, false
}

func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, err error, issues []error) {
func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (returned_wf *Workflow, err error, issues []error) {

	ret = &Workflow{}
	returned_wf = &Workflow{}

	// When we iterate over the full array of cells, we will first register the resources
	// and then the linkage between them
@ -574,6 +574,18 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
		return xmlmodel.Root.MxCell[i].RID != nil
	})

	// For each cell of the xml graph:
	// if the cell has a rID, retrieve its rType from the value of the rID of the component in the workflow,
	//   retrieve the component's type,
	//   create an object from the rType,
	//   and update the existing workflow with the new component
	// or, by default, the cell represents an arrow:
	//   if the source or the target of the arrow is a datacenter,
	//     define which end of the arrow is the DC
	//     if the other end of the arrow is a computing component,
	//       create a computing object,
	//       attach the DC to it,
	//       and update the workflow with the object: create the list for this type of component, or update that list's entry for the component's id with the object
	for _, cell := range xmlmodel.Root.MxCell {

		switch {
@ -595,10 +607,11 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
				nil
			}

			resObj := ret.CreateResourceObject(rType)
			resObj := returned_wf.CreateResourceObject(rType)
			resObj.setReference(rIDObj)

			ret.UpdateObj(resObj, cell.ID)
			returned_wf.UpdateObj(resObj, cell.ID)

		case cell.ID == "0" || cell.ID == "1":
			// ID 0 and 1 are special cases of mxeditor
			continue
@ -606,8 +619,8 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
		default:
			// Not root nor resource. Should be only links
			sourceObj := ret.GetResource(cell.Source)
			targetObj := ret.GetResource(cell.Target)
			sourceObj := returned_wf.GetResource(cell.Source)
			targetObj := returned_wf.GetResource(cell.Target)

			if sourceObj == nil || targetObj == nil {
				if sourceObj == nil && targetObj == nil {
@ -633,18 +646,18 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
					datacenterLinked = cell.Source
				}

				switch ret.GetResource(datacenterLinked).getRtype() {
				switch returned_wf.GetResource(datacenterLinked).getRtype() {
				case rtype.COMPUTING:
					computingObj := ret.GetResource(datacenterLinked).(*ComputingObject)
					computingObj := returned_wf.GetResource(datacenterLinked).(*ComputingObject)

					// We should always get an ID, because we have already registered the resources and discarded those which don't correspond to existent models
					computingObj.DataCenterID = *datacenter
					ret.UpdateObj(computingObj, *datacenterLinked)
					returned_wf.UpdateObj(computingObj, *datacenterLinked)
				}

			} else {
				targetObj.addLink(INPUT, *cell.Source)
				ret.UpdateObj(targetObj, *cell.Target) // save back
				returned_wf.UpdateObj(targetObj, *cell.Target) // save back

				// If we have a relationship of:
				// Source ----> Target
@ -653,7 +666,7 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
				// But we also must make sure that the Target will be in the OUTPUTs of the Source

				sourceObj.addLink(OUTPUT, *cell.Target)
				ret.UpdateObj(sourceObj, *cell.Source)
				returned_wf.UpdateObj(sourceObj, *cell.Source)
			}

		}
@ -663,7 +676,9 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
	dataslist := make(map[string]bool)
	// datalist := make(map[string]bool)

	for _, comp := range ret.Computing {

	// Test whether the computing components are linked with a DC
	for _, comp := range returned_wf.Computing {
		if comp.DataCenterID == "" {
			issues = append(issues, errors.New("Computing "+*comp.getName()+" without a Datacenter"))
		} else {
@ -673,14 +688,14 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
		}

		for _, dcin := range comp.Inputs {
			switch ret.GetResource(&dcin).getRtype() {
			switch returned_wf.GetResource(&dcin).getRtype() {
			case rtype.DATA:
				dataslist[dcin] = true
			}
		}

		for _, dcout := range comp.Outputs {
			switch ret.GetResource(&dcout).getRtype() {
			switch returned_wf.GetResource(&dcout).getRtype() {
			case rtype.DATA:
				dataslist[dcout] = true
			}
@ -688,23 +703,23 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
	}

	for _, va := range ret.Storage {
		if va.Inputs == nil && va.Outputs == nil {
			issues = append(issues, errors.New("Storage "+*va.getName()+" without compatible inputs and outputs"))
	for _, storage_component := range returned_wf.Storage {
		if storage_component.Inputs == nil && storage_component.Outputs == nil {
			issues = append(issues, errors.New("Storage "+*storage_component.getName()+" without compatible inputs and outputs"))
		}
	}

	for dcID, va := range ret.Datacenter {
	for dcID, dc_component := range returned_wf.Datacenter {
		// if rID doesn't exist in the list, it means that it's not used
		if _, ok := dcslist[dcID]; !ok {
			issues = append(issues, errors.New("DC "+*va.getName()+" not attached to any Computing"))
			issues = append(issues, errors.New("DC "+*dc_component.getName()+" not attached to any Computing"))
		}
	}

	for dcID, va := range ret.Data {
	for dcID, data_component := range returned_wf.Data {
		// if rID doesn't exist in the list, it means that it's not used
		if _, ok := dataslist[dcID]; !ok {
			issues = append(issues, errors.New("Data "+*va.getName()+" not attached to any Computing"))
			issues = append(issues, errors.New("Data "+*data_component.getName()+" not attached to any Computing"))
		}
	}
@ -722,7 +737,7 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
	// inputs AND Comp2 inputs with Comp1 outputs, since we are
	// iterating over all existent Computing models in the Graph

	for _, comp := range ret.Computing {
	for _, comp := range returned_wf.Computing {

		compModel, err2 := comp.getModel()
		if err = err2; err != nil {
@ -744,7 +759,7 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
		//TODO: We should allow heterogeneous inputs?
		for _, objIn := range comp.Inputs {
			resIn := ret.GetResource(&objIn)
			resIn := returned_wf.GetResource(&objIn)
			resInType := resIn.getRtype()
			switch resInType {
			case rtype.DATA:
@ -781,7 +796,7 @@ func (w Workspace) ConsumeMxGraphModel(xmlmodel MxGraphModel) (ret *Workflow, er
		//TODO: We should allow heterogeneous outputs?
		for _, objOut := range comp.Outputs {
			resOut := ret.GetResource(&objOut)
			resOut := returned_wf.GetResource(&objOut)
			resOutType := resOut.getRtype()
			switch resOutType {
			case rtype.COMPUTING:
@ -148,6 +148,94 @@
                "license": "GPLv3",
                "inputs": [],
                "outputs": []
            },
            {
                "name": "CURL",
                "image" : "curlimages/curl:7.88.1",
                "short_description": "Transfer or retrieve information from or to a server ",
                "logo": "./local_imgs/curl-logo.png",
                "description": "curl is a tool for transferring data from or to a server. It supports these protocols: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS.",
                "type": "computing",
                "owner": "IRT",
                "price": 300,
                "license": "GPLv2",
                "execution_requirements": {
                    "cpus": 1,
                    "ram": 1024,
                    "storage": 300,
                    "gpus": 1,
                    "disk_io": "30 MB/s",
                    "parallel": true,
                    "scaling_model": 2
                },
                "inputs": [],
                "outputs": []
            },
            {
                "name": "alpine",
                "image" : "alpine:3.7",
                "short_description": "A minimal Docker image ",
                "logo": "./local_imgs/alpine-logo.png",
                "description": "Alpine Linux is a Linux distribution built around musl libc and BusyBox. The image is only 5 MB in size and has access to a package repository that is much more complete than other BusyBox based images. This makes Alpine Linux a great image base for utilities and even production applications",
                "type": "computing",
                "owner": "IRT",
                "price": 300,
                "license": "GPLv2",
                "execution_requirements": {
                    "cpus": 1,
                    "ram": 1024,
                    "storage": 300,
                    "gpus": 1,
                    "disk_io": "30 MB/s",
                    "parallel": true,
                    "scaling_model": 2
                },
                "inputs": [],
                "outputs": []
            },
            {
                "name": "alpr",
                "image" : "openalpr/openalpr",
                "short_description": "Open source Automatic License Plate Recognition library.",
                "logo": "./local_imgs/alpr-logo.png",
                "description": "Deploy license plate and vehicle recognition with Rekor’s OpenALPR suite of solutions designed to provide invaluable vehicle intelligence which enhances business capabilities, automates tasks, and increases overall community safety!",
                "type": "computing",
                "owner": "IRT",
                "price": 300,
                "license": "GPLv2",
                "execution_requirements": {
                    "cpus": 1,
                    "ram": 1024,
                    "storage": 300,
                    "gpus": 1,
                    "disk_io": "30 MB/s",
                    "parallel": true,
                    "scaling_model": 2
                },
                "inputs": [],
                "outputs": []
            },
            {
                "name": "imagemagic",
                "image" : "dpokidov/imagemagick:7.1.0-62-2",
                "short_description": "ImageMagick® is a free, open-source software suite, used for editing and manipulating digital images.",
                "logo": "./local_imgs/imagemagic-logo.png",
                "description": "Use ImageMagick to create, edit, compose, and convert digital images. Resize an image, crop it, change its shades and colors, add captions, and more.",
                "type": "computing",
                "owner": "IRT",
                "price": 300,
                "license": "GPLv2",
                "execution_requirements": {
                    "cpus": 1,
                    "ram": 1024,
                    "storage": 300,
                    "gpus": 1,
                    "disk_io": "30 MB/s",
                    "parallel": true,
                    "scaling_model": 2
                },
                "inputs": [],
                "outputs": []
            }
        ]
    },
@ -158,7 +246,7 @@
                "name": "IRT risk database",
                "short_description": "IRT Database instance",
                "logo": "./local_imgs/IRT risk database.png",
                "description": "A very long description of what this data is",
                "description": "A very long description of what this storage is",
                "type": "database",
                "DCacronym": "DC_myDC",
                "size": 4000,
@ -167,13 +255,14 @@
                "throughput": "r:200,w:150",
                "bookingPrice": 60,
                "inputs": [],
                "outputs": []
                "outputs": [],
                "URL" : ""
            },
            {
                "name": "IRT local file storage",
                "short_description": "S3 compliant IRT file storage",
                "logo": "./local_imgs/IRT local file storage.png",
                "description": "A very long description of what this data is",
                "description": "A very long description of what this storage is",
                "type": "storage",
                "DCacronym": "DC_myDC",
                "size": 40000,
@ -182,7 +271,24 @@
                "throughput": "r:300,w:350",
                "bookingPrice": 90,
                "inputs": [],
                "outputs": []
                "outputs": [],
                "URL" : ""
            },
            {
                "name": "Mosquito server",
                "short_description": "open source message broker that implements the MQTT protocol versions 5.0, 3.1.1 and 3.1.",
                "logo": "./local_imgs/mosquitto-logo.png",
                "description": "A very long description of what this storage is",
                "type": "storage",
                "DCacronym": "DC_myDC",
                "size": 40000,
                "encryption": false,
                "redundancy": "RAID5S",
                "throughput": "r:300,w:350",
                "bookingPrice": 90,
                "inputs": [],
                "outputs": [],
                "URL" : ""
            }
        ]
    },
BIN
scripts/local_imgs/alpine-logo.png
Normal file
After Width: | Height: | Size: 5.3 KiB |
BIN
scripts/local_imgs/alpr-logo.png
Normal file
After Width: | Height: | Size: 14 KiB |
BIN
scripts/local_imgs/curl-logo.png
Normal file
After Width: | Height: | Size: 4.2 KiB |
BIN
scripts/local_imgs/imagemagic-logo.png
Normal file
After Width: | Height: | Size: 14 KiB |
BIN
scripts/local_imgs/mosquitto-logo.png
Normal file
After Width: | Height: | Size: 12 KiB |