# Daestro

> [Daestro](https://daestro.com) is a cloud-agnostic orchestration platform designed to run compute workloads across multiple cloud providers and self-hosted infrastructure.

## Links

- [Documentation](https://daestro.com/docs)
- [Blog](https://daestro.com/blog)
- [Pricing](https://daestro.com/pricing)
- [sitemap.xml](https://daestro.com/sitemap.xml)

## Features

- Cloud agnostic: works with AWS, DigitalOcean, Linode, Vultr, and even on-prem infrastructure
- Jobs can run on your own machines, from laptops to enterprise servers
- View the logs for each job run in real time or after the job has completed
- View metrics such as CPU and memory usage for each job run
- Set fine-grained controls on concurrency, priority, and resource usage for your jobs
- Use job queues to make sure high-priority jobs are processed first
- Cron jobs: run recurring jobs on a specified schedule
- Schedule jobs to run at a later date or time
- Set custom CPU and memory quotas per job definition
- Supports Docker images and arbitrary Bash scripts
- Jobs can be submitted from either the Console (dashboard) or the API
- All secrets and API keys are encrypted with AES-256

## Console

[Daestro Console](https://console.daestro.com) is the dashboard that lets users sign up with GitHub or Google OAuth to access the Daestro platform.

## Cloud Provider

Daestro supports AWS, Vultr, DigitalOcean, and Linode on its platform. Users can add the API key for the cloud provider they want to run jobs on.

## Cloud Auth

Cloud providers (AWS, Vultr, DigitalOcean, and Linode) are first-class citizens of Daestro. Daestro works directly against these providers' APIs to provide complete integration. To do what it does, Daestro needs an API key from the cloud provider so it can access resources on behalf of the user's account. The user must create a new **Cloud Auth** with that API key. Since Daestro uses the user's cloud account to run compute, the usage cost of the servers falls upon the user.
Daestro only charges the platform usage fee; all other resource usage fees are borne by the user.

### Creating Cloud Auth

- Go to the [Cloud Auth](https://console.daestro.com/cloud-auth) page
- Click [Add Cloud Auth](https://console.daestro.com/cloud-auth/add)
- A form is shown that first asks you to choose a **Cloud Provider**
- Upon selecting a cloud provider, more fields appear: for AWS it asks for "Access Key Id" and "Secret Access Key", while the others (DigitalOcean, Linode, Vultr) ask for an "Access Token"
- After filling in the respective API keys, give the Cloud Auth a meaningful name
- API keys are encrypted (using AES-256-GCM) and then stored in the database

### Updating Cloud Auth

- The user can update the name of a Cloud Auth
- The user can also update the keys of a Cloud Auth after confirming that the new keys belong to the same cloud provider account; if they do not, the user owns the risk of service disruption due to mismatched keys

### Deleting Cloud Auth

- A Cloud Auth can be deleted only if no **Compute Environment** is associated with it

## Compute Environment

A specification defining the type of compute instance (VM), its location (region/zone), and the associated Cloud Auth. Think of it as a template for launching servers.

### Creating Compute Environment

- Go to [Compute Environment](https://console.daestro.com/compute-env)
- Click [Add New](https://console.daestro.com/compute-env/add)
- Select a Cloud Auth: alongside all the Cloud Auths the user has created, the list also includes a special "Self-hosted Compute" option
- When "Self-hosted Compute" is selected:
  - Name (Required): an identifiable name for the Compute Environment
  - Usable CPU Quota Percentage (Optional): percentage of a CPU core to use for running jobs. 100% means 1 full core; likewise 50% = 0.5 core and 400% = 4 cores. The minimum value is 10%; leave it blank to use the full system capacity.
  - Usable Memory Quota (MB) (Optional): puts a constraint on the amount of memory allotted to jobs
- When the user selects a Cloud Auth created for a cloud provider, they must select an Instance Family, Instance Type, and Location for the Compute Environment
- When the user selects the "Self-hosted Compute" option, they can use this type of Compute Environment to run jobs on their own machines (laptops/desktops/on-prem/VPS) by creating a Compute Spawn and running the Daestro Agent Docker container with the given Auth Token (it can be generated from the self-hosted Compute Environment's page)
- Name is compulsory in all cases

### Updating Compute Environment

The user can only update the Name and activate/deactivate the environment.

### Deleting Compute Environment

Conditions to delete a Compute Environment: it must be in the disabled state, and there must not be any active Compute Spawn associated with it.

## Job Queue

A prioritized queue that manages the execution of jobs. It controls concurrency, priority, and the Compute Environments used for running jobs. A Job Queue can only be enabled when at least one Compute Environment is associated with it.

### Creating Job Queue

- Go to the [Job Queue](https://console.daestro.com/job-queue) page
- Click [Create Job Queue](https://console.daestro.com/job-queue/add)
- Form fields:
  - Name (Required): must be 1-128 characters. Valid characters are a-z, A-Z, 0-9, and hyphens (-). Must be unique within the user account.
  - Description (Optional)
  - Priority (Required): must be between 1-1000. A lower number means higher priority when executing jobs.
  - Max Concurrency (Required): 0 means no limit. Dictates how many jobs can be processed concurrently in the Job Queue.
  - Max Compute Spawn (Required): 0 means no limit. Limits the number of instances that can be spawned for the Job Queue.
  - Max Idle Time (in seconds) (Required): controls how long a Compute Spawn can be idle (no job) before it is terminated.
  - SSH Public Key (Optional): the user can add an SSH public key here to log into the Compute Spawns (e.g. `ssh daestro@`)
- When submitted with valid field values, a new Job Queue is created in the disabled state

### Adding / Deleting Compute Environments in a Job Queue

From the Job Queue detail page, the user can add and delete Compute Environments as needed. Having multiple Compute Environments per Job Queue is recommended when running a large number of jobs. Currently, Compute Environments are selected round-robin for a new Compute Spawn if the job has no CPU and Memory quota defined. If the job (job definition) has CPU and Memory quotas defined, then Compute Environments are sorted by best fit for the required resources.

### Updating Job Queue

Only the Description, Priority, Max Concurrency, Max Idle Time, and Active state can be edited after Job Queue creation.

### Deleting Job Queue

- Can be deleted from the Job Queue detail page
- Conditions:
  - There are no active Compute Spawns associated with it
  - No Cron Job is using the Job Queue

## Compute Spawn

A Compute Spawn is a server/compute instance linked to Daestro. A Compute Spawn is created within a Job Queue, using a Compute Environment that is part of that Job Queue.

Types of Compute Spawn:

- selfhosted: the user manually manages the compute and links the Daestro Agent using a self-hosted Compute Environment, just for running jobs
- cloud: Daestro manages cloud-based Compute Spawns using a Compute Environment linked with a Cloud Auth. Daestro handles provisioning, scaling, and termination for Compute Spawns of this type.
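The Compute Environment selection described for Job Queues (round-robin when a job defines no quotas, best fit when it does) can be sketched roughly as below. The environment names, tuple shape, and `pick_env` helper are illustrative assumptions, not Daestro's actual scheduler:

```python
from itertools import cycle

# Hypothetical compute-environment records: (name, cpu_pct, memory_mb).
envs = [
    ("small",  100, 1024),
    ("medium", 200, 4096),
    ("large",  400, 8192),
]

round_robin = cycle(envs)

def pick_env(cpu_pct=None, memory_mb=None):
    """Round-robin when the job defines no quotas; otherwise best fit:
    the smallest environment that still satisfies both quotas."""
    if cpu_pct is None and memory_mb is None:
        return next(round_robin)
    candidates = [e for e in envs if e[1] >= cpu_pct and e[2] >= memory_mb]
    # Sort by capacity so the tightest fit wins.
    return min(candidates, key=lambda e: (e[1], e[2])) if candidates else None

print(pick_env(cpu_pct=150, memory_mb=2048)[0])  # best fit -> medium
```

Best fit keeps larger environments free for jobs that actually need them, which matters once several quota-bearing jobs share a queue.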
A Compute Spawn has the following states:

- failed: Daestro failed to create the Compute Spawn via the Cloud Auth
- initialized: when the user creates a Compute Spawn for a "Self-hosted Compute" type Compute Environment, its default state is initialized
- requested: when a Compute Spawn of type "cloud" is requested via the Cloud Auth, it stays in the requested state until the agent is installed on the Compute Spawn and pings back
- idle: the Daestro Agent is live and pinging back, and no job is assigned
- busy: the Compute Spawn is running a job
- unreachable: the Daestro Agent has not pinged the Daestro server for more than 5 minutes
- terminating: the Compute Spawn is queued for termination
- terminated: the Compute Spawn has been terminated
- termination_failed: Daestro failed to terminate the Compute Spawn

### Creating Self-hosted Compute Spawn

- Daestro provides the "Daestro Agent", which you can use to link your compute (laptop, server, etc.) with Daestro and run jobs
- To link your compute, you first need an "Agent Auth Token"
- Creating an Agent Auth Token:
  - Open a Compute Environment of type "Selfhosted"
  - In the "Compute Spawn" section, click "New Compute"
  - All the Job Queues that the Compute Environment is part of will be listed
  - Select the one you want to associate the Compute Spawn with
  - Your "Agent Auth Token" will be generated
- Running the Daestro Agent via Docker:
  - The easiest way to run the Daestro Agent is via Docker
  - Since Daestro uses Docker itself to run your jobs, the agent needs access to the Docker socket, which is generally located at `/var/run/docker.sock`
  - The Daestro Agent stores state data locally, so it is recommended to bind that path to a Docker volume so that no data is lost on restarts
  - `docker run --name daestro-agent -e DAESTRO_AUTH_TOKEN="" -v /var/run/docker.sock:/var/run/docker.sock -v daestro_agent_data:/var/lib/daestro-agent --network host daestro/daestro-agent:latest`

### Running multiple jobs on a Compute Spawn

- Multiple jobs can be run on a
Compute Spawn; however, there are some conditions:
- The Job Definition to be run must have both a CPU Quota and a Memory Quota defined.
- If the defined quotas are less than the Compute Spawn's capacity, the job can run, and any other job that fits within the Compute Spawn's remaining capacity can also run.
- If no quota is defined in the Job Definition, the job uses the full capacity of the Compute Spawn, and no other jobs can run simultaneously on that Compute Spawn.
- If a Job Definition has higher CPU and Memory quotas than a Compute Spawn that is part of the Job Queue, and that Compute Spawn is not running any job, the job can still be assigned and will use the Compute Spawn's full capacity.
- Quotas are merely constraints that make it possible to run multiple jobs on one Compute Spawn. Even if a job's quota requirement is higher than a Compute Spawn's capacity, the job can still run on it, since the Compute Spawn is already part of the Job Queue.

## Container Registry Auth

Credentials used to access private container registries (like Docker Hub or private registries) to pull your application's Docker images.

### Creating Container Registry Auth

- Go to [Container Registry Auth](https://console.daestro.com/container-registry-auth)
- Click [Add New](https://console.daestro.com/container-registry-auth/add)
- Form fields:
  - Name (Required): give it a meaningful name
  - Registry Url (Optional): must be filled if using an image from anywhere other than Docker Hub
  - Username
  - Password / Personal Access Token
- The Registry Url and the credentials (username and password) cannot both be blank

### Updating Container Registry Auth

Once created, a Container Registry Auth cannot be edited.

### Deleting Container Registry Auth

A Container Registry Auth can be deleted if it is not associated with any Job Definition.

## Job Definition

A blueprint for your job, specifying the Docker image, commands, resource requirements, and other settings.
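The conditions above for running multiple jobs on one Compute Spawn amount to a packing check. A minimal sketch follows; the `can_place` helper, its signature, and the capacity numbers are hypothetical, not Daestro's scheduler:

```python
def can_place(free_cpu, free_mem, total_cpu, total_mem,
              job_cpu=None, job_mem=None):
    """Whether another job can be placed on a Compute Spawn.

    Illustrative only. Quotas are CPU percent and memory in MB:
    - no quotas: the job takes the whole spawn, so it needs an idle one
    - quotas within remaining capacity: the job fits alongside others
    - oversized quotas: still runnable on a fully idle spawn, at full capacity
    """
    spawn_idle = (free_cpu == total_cpu and free_mem == total_mem)
    if job_cpu is None or job_mem is None:
        return spawn_idle
    if job_cpu <= free_cpu and job_mem <= free_mem:
        return True
    return spawn_idle

# A 100%-CPU / 1 GB job fits on a fully free 400% / 8 GB spawn.
print(can_place(400, 8192, 400, 8192, 100, 1024))  # True
```

Note how the last branch encodes the "quotas are merely constraints" rule: an oversized job is not rejected outright, it just needs the spawn to itself.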
### Creating Job Definition

- Go to [Job Definition](https://console.daestro.com/job-definition)
- Click [Create Job Definition](https://console.daestro.com/job-definition/add)
- First select the type of Job Definition: Docker or Bash Script
- Docker
  - Form fields:
    - Name (Required): must be 1-128 characters. Valid characters are a-z, A-Z, 0-9, and hyphens (-). Must be unique within the account.
    - Docker Image (Required): name of the Docker image to use
    - Container Registry Auth (Optional): select from the list of added Container Registry Auths, if applicable
    - Execution Timeout (seconds) (Required): controls how long a job can run before it is cancelled. 0 means indefinite.
    - Command (Optional): command to run in the container. Spread it over multiple fields like an array. Use `Param::` to pass custom parameters to the command, which can be set dynamically via Command Parameters.
    - Command Parameters (Optional): can be used to override the command parameters referenced in the command, either in the Job Definition or at job submission time
    - Environment Variables (Optional): all values are encrypted. Empty strings are not allowed and will be removed. If you mark a value as sensitive, you won't be able to view it after saving.
    - CPU Quota Percentage (Optional): percentage of a CPU core to use. 100% means 1 full core; likewise 50% = 0.5 core and 400% = 4 cores. The minimum value is 10%; leave it blank to use the full system capacity.
    - Memory Quota (Optional): puts a constraint on memory usage for your container. Jobs can be killed due to OOM if the value is set lower than your container needs, so set it after proper testing.
    - Privileged (checkbox): gives extended privileges to the container. Not recommended unless you know what you are doing.
- Bash Script
  - Form fields:
    - Name (Required): must be 1-128 characters. Valid characters are a-z, A-Z, 0-9, and hyphens (-). Must be unique within the account.
    - Docker Image (Not editable): uses `ubuntu:24.04` by default
    - Execution Timeout (seconds) (Required): controls how long a job can run before it is cancelled. 0 means indefinite.
    - Bash Script: write the Bash script for Daestro to run in the container
    - Environment Variables (Optional): all values are encrypted. Empty strings are not allowed and will be removed. If you mark a value as sensitive, you won't be able to view it after saving.
    - CPU Quota Percentage (Optional): percentage of a CPU core to use. 100% means 1 full core; likewise 50% = 0.5 core and 400% = 4 cores. The minimum value is 10%; leave it blank to use the full system capacity.
    - Memory Quota (Optional): puts a constraint on memory usage for your container. Jobs can be killed due to OOM if the value is set lower than your container needs, so set it after proper testing.
    - Privileged (checkbox): gives extended privileges to the container. Not recommended unless you know what you are doing.

### Notes

- The user must either fill in both the CPU Quota and Memory Quota fields, or leave them both blank
- By setting CPU and Memory quotas, multiple jobs can be run on the same Compute Spawn, provided the Compute Spawn has quota available for the additional job

### Editing Job Definition

Job Definitions cannot be edited directly; instead, you can create revisions of a Job Definition, which creates a new Job Definition with the same name and an incremented version number.

### Job Definition Revision

The user can update all fields except "Name" when creating a revision of a Job Definition. The revised Job Definition keeps the same name with an incremented version number. This is a great way to update a Job Definition without interrupting currently running jobs, and it also lets you experiment safely.

### Deleting Job Definition

Conditions:

- There are no jobs running
- No Cron Job is associated with this Job Definition

## Job

A runnable instance of a Job Definition.
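The both-or-neither quota rule noted for Job Definitions can be expressed as a simple validation. This is a hypothetical helper for illustration, not part of Daestro:

```python
def validate_quotas(cpu_pct=None, memory_mb=None):
    """Enforce the Job Definition quota rules: set both CPU Quota and
    Memory Quota, or leave both blank; CPU Quota has a 10% minimum.
    (Illustrative sketch, not Daestro's actual validation.)"""
    if (cpu_pct is None) != (memory_mb is None):
        raise ValueError("Set both CPU Quota and Memory Quota, or neither")
    if cpu_pct is not None and cpu_pct < 10:
        raise ValueError("Minimum CPU Quota Percentage is 10%")
    return cpu_pct, memory_mb

print(validate_quotas(200, 4096))  # (200, 4096)
```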
### Creating Job

- Go to [Add New Job](https://console.daestro.com/job/new) from the Dashboard
- Form fields:
  - Name (Optional): letters, numbers, and dashes only. If left blank, a random name is generated. Must be unique within the Job Queue.
  - Job Definition (Required): select one from the list of active Job Definitions in your account
  - Job Queue (Required): select one from the list of active Job Queues in your account
  - Command (Optional): overrides the Job Definition's Command value
  - Command Parameters (Optional): overrides the Command Parameters in the Job Definition, or is used as-is to substitute into the Command
  - Environment Variables (Optional): overrides the Job Definition's value when the key is the same; otherwise used as-is
  - Schedule At (Optional): select a date and time to run the job later. Leave it blank to run immediately.

### Cancel Job

Jobs that are queued or running can be cancelled from the job's page.

### Re-submit Job

A job can only be re-submitted to run again when it is not queued or running.

## Cron Jobs

The user can create **Cron Jobs** on Daestro, which execute a job on a given schedule. The minimum interval at which a Cron Job can run is 1 minute. After creating a Cron Job, the user can see its estimated upcoming triggers on its page.
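Daestro cron expressions follow the Quartz Scheduler format of six fields plus an optional year. A minimal shape check might look like this; the `split_quartz` helper is an illustrative sketch, and real Quartz validation of field values is far richer:

```python
def split_quartz(expr):
    """Split a Quartz-style cron expression into its fields.

    Quartz uses 6 fields (second minute hour dayOfMonth month dayOfWeek)
    plus an optional 7th year field. This only checks the field count,
    not the values. (Hypothetical helper, not part of Daestro.)"""
    fields = expr.split()
    if len(fields) not in (6, 7):
        raise ValueError(f"expected 6 or 7 fields, got {len(fields)}")
    return fields

# "At second 0 of every 5th minute" — note the leading seconds field
# and Quartz's "?" placeholder for the unused day field.
print(split_quartz("0 */5 * * * ?"))
```

Keep in mind that even an expression firing every second is throttled to Daestro's 1-minute minimum interval.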
### Creating Cron Jobs

- Go to the [Cron Jobs](https://console.daestro.com/cron-job) page
- Open [New Cron Job](https://console.daestro.com/cron-job/form)
- Form fields:
  - Cron Expression (Required)
    - Follows the Quartz Scheduler format: `second minute hour dayOfMonth month dayOfWeek year(optional)`
    - Even if the cron expression is set to run every second, the minimum interval it can have is 1 minute
  - Job Definition (Required)
  - Job Queue (Required)
  - Name (Required)
  - Description (Optional)

### Updating Cron Job

- A Cron Job can be updated from its detail page
- A Cron Job can be enabled or disabled at any time
- The following fields can be updated:
  - Cron Expression
  - Name
  - Description

### Deleting Cron Job

A Cron Job can be deleted from its own page.

## API

The Daestro API provides programmatic access to Daestro services through a set of RESTful endpoints. To authenticate your API requests, you'll need to obtain an API key from the Daestro Console. This key must be included in the `Authorization` header of all API requests.

**Base Url**

```bash
https://api.daestro.com
```

**Header**

```bash
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
```

### Create API Key

You'll need an API key to authenticate yourself. You can easily create one from the Console: log in and go to the [API Keys](https://console.daestro.com/settings/api-key) page from the sidebar. Each request must include the API key in the `Authorization` header.

```bash
Authorization: Bearer YOUR_API_KEY
```

### API Reference

- [Job Submit API](https://daestro.com/docs/api-reference/job-submit)
- [Job Detail API](https://daestro.com/docs/api-reference/job-detail)
- [Job Cancel API](https://daestro.com/docs/api-reference/job-cancel)
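As a sketch of calling the API from code, the snippet below builds an authenticated request with Python's standard library. The base URL and headers come from the section above; the endpoint path and payload field names are assumptions for illustration only, so consult the Job Submit API reference for the real contract:

```python
import json
import urllib.request

API_BASE = "https://api.daestro.com"

def build_job_submit_request(api_key, job_definition, job_queue):
    """Build an authenticated job-submit request.

    The "/job/submit" path and the payload keys are hypothetical;
    only the base URL and the two headers are documented."""
    payload = json.dumps({
        "job_definition": job_definition,  # assumed field name
        "job_queue": job_queue,            # assumed field name
    }).encode()
    return urllib.request.Request(
        API_BASE + "/job/submit",  # assumed path
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_job_submit_request("YOUR_API_KEY", "my-job-def", "my-queue")
print(req.get_header("Authorization"))  # Bearer YOUR_API_KEY
# urllib.request.urlopen(req) would actually send it.
```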