
#gosht

gosht is a cool guy whomst looks at sql tables and runs commands that are put into the table, yep!

postgres and posix only, PL0X

discuss or send patches to the mailing list:

ops@cyberia.club

#extraneous info

currently capsul runs as a monolith, and as it grows it will need to be able to run somewhat "distributed".

Capsul has two "systems":

  1. api
  2. worker

#capsul-flask

capsul-flask is a python-based webapp that runs on the "capsul api" server. There can be only one capsul-api server for now.

capsul-flask is responsible for presenting a web interface to clients, reporting on information that is in the database, and submitting jobs to workers.

A job submission would go something like:

client POST -> python

INSERT INTO vms
(id, email, os, size, last_seen_ipv4, last_seen_ipv6,
 created, deleted, placement, ssh_fingerprint, state)
VALUES ('capsul-as9w8djads', 'j3s@c3f.net', 'alpine312', NULL, NULL, NULL,
 '2020-11-11', NULL, 'any', NULL, 'pending creation');

INSERT INTO jobs (operation, data)
VALUES ('create-capsul', '{"name":"capsul-as9w8djads","template":"alpine/3.12/root.img.qcow2","pubkeys":"key1\nkey2","cpus":"1","memory":"512"}');

#capsul-worker

capsul-worker is a golang app that runs as the kvm user on every capsul worker.

capsul-worker basically queries a postgres table on an interval, and runs jobs based on the results of the query.

something like:

SELECT * FROM jobs WHERE processed_by IS NULL AND
(placement = 'atlanta-1' OR placement = 'any') LIMIT 1 FOR UPDATE;
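
as a rough sketch (not the actual implementation), the polling loop around that query could look like this in Go. it assumes the lib/pq driver, a 5-second interval, and a worker named 'baikal' that also uses its hostname as its placement; all of those details are illustrative:

package main

import (
    "database/sql"
    "log"
    "time"

    _ "github.com/lib/pq"
)

func main() {
    db, err := sql.Open("postgres", "dbname=capsul-flask sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }

    for {
        if err := claimAndRunJob(db, "baikal"); err != nil {
            log.Println(err)
        }
        time.Sleep(5 * time.Second) // poll interval is a guess
    }
}

// claimAndRunJob grabs at most one unprocessed job this worker is allowed to
// handle, runs it, and marks it processed, all inside one transaction so two
// workers can't claim the same row.
func claimAndRunJob(db *sql.DB, worker string) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    defer tx.Rollback()

    var id int
    var operation, data string
    err = tx.QueryRow(`
        SELECT id, operation, data FROM jobs
        WHERE processed_by IS NULL
          AND (placement = $1 OR placement = 'any')
        LIMIT 1 FOR UPDATE`, worker).Scan(&id, &operation, &data)
    if err == sql.ErrNoRows {
        return nil // nothing to do this interval
    }
    if err != nil {
        return err
    }

    runJob(operation, data) // dispatch; a fuller sketch is further down

    _, err = tx.Exec(`UPDATE jobs SET processed_by = $1 WHERE id = $2`, worker, id)
    if err != nil {
        return err
    }
    return tx.Commit()
}

// runJob is a stand-in here; the real worker would dispatch on operation.
func runJob(operation, data string) {
    log.Printf("would run %s with %s", operation, data)
}

if several workers poll the same table, appending SKIP LOCKED to the locking clause would let one worker skip rows another transaction already holds.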

the jobs table looks something like:

        jobs
id    operation     data
-------------------------------------------------
1     create-capsul  {"name":"capsul-as9w8djads","template":"alpine/3.12/root.img.qcow2","cpus":"1","memory":"512","pubkeys":"key1,key2"}

The job runner then has some logic like:

if operation == create-capsul && i have sufficient resources,
    ./create-capsul capsul-as9w8djads f1-xs
    <wait for capsul to come online>
    ./get-ip
    ./get-ssh-fingerprint
    UPDATE vms SET (state, datacenter, last_seen_ipv4, ssh_fingerprint)...
    UPDATE jobs SET processed_by = 'baikal' WHERE ...;
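
here is a hedged Go sketch of that create-capsul branch, assuming the worker shells out to create-capsul, get-ip, and get-ssh-fingerprint scripts in its working directory as the pseudocode above suggests. the size argument, the 'running' state, and the connection string are all guesses:

package main

import (
    "database/sql"
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
    "strings"

    _ "github.com/lib/pq"
)

// createCapsulData mirrors the JSON held in jobs.data for a create-capsul job.
type createCapsulData struct {
    Name     string `json:"name"`
    Template string `json:"template"`
    CPUs     string `json:"cpus"`
    Memory   string `json:"memory"`
    Pubkeys  string `json:"pubkeys"`
}

// createCapsul sketches the create-capsul branch: shell out to the existing
// scripts, then record what came back in the vms table.
func createCapsul(db *sql.DB, data string) error {
    var d createCapsulData
    if err := json.Unmarshal([]byte(data), &d); err != nil {
        return err
    }

    // the "f1-xs" size argument is illustrative; the real size would come
    // from the job payload or the vms row
    if out, err := exec.Command("./create-capsul", d.Name, "f1-xs").CombinedOutput(); err != nil {
        return fmt.Errorf("create-capsul: %v: %s", err, out)
    }

    // <wait for capsul to come online> would go here, e.g. retry get-ip until it answers

    ip, err := exec.Command("./get-ip", d.Name).Output()
    if err != nil {
        return err
    }
    fp, err := exec.Command("./get-ssh-fingerprint", d.Name).Output()
    if err != nil {
        return err
    }

    // 'running' as the post-creation state is an assumption
    _, err = db.Exec(`
        UPDATE vms
        SET state = 'running', last_seen_ipv4 = $1, ssh_fingerprint = $2
        WHERE id = $3`,
        strings.TrimSpace(string(ip)), strings.TrimSpace(string(fp)), d.Name)
    return err
}

func main() {
    db, err := sql.Open("postgres", "dbname=capsul-flask sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    // payload copied from the jobs table example above
    data := `{"name":"capsul-as9w8djads","template":"alpine/3.12/root.img.qcow2","cpus":"1","memory":"512","pubkeys":"key1,key2"}`
    if err := createCapsul(db, data); err != nil {
        log.Fatal(err)
    }
}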

and that's it!

other operations might look like:

id   processed_by placement   operation            data
---------------------------------------------------------------------------------------------
99   baikal       baikal      destroy-capsul       {"name":"capsul-siod9wdsdd"}
100  baikal       baikal      resize-capsul        {"name":"capsul-siod9wdsdd","size":"f1-m"}
101  baikal       any         stop-capsul          {"name":"capsul-siod9wdsdd","force":false}
102  baikal       any         stop-capsul          {"name":"capsul-siod9wdsdd","force":true}
103  baikal       any         start-capsul         {"name":"capsul-siod9wdsdd"}
104  baikal       any         update-capsul-ip     {"name":"capsul-siod9wdsdd"}
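
the dispatch over that operation column could be a plain switch. in this sketch every handler is a made-up stub that only logs, and the create-capsul branch from the earlier sketch would slot in alongside them:

package main

import (
    "fmt"
    "log"
)

// one stub per operation from the table above; a real handler would shell out
// to the matching script the way the create-capsul sketch does
func destroyCapsul(data string) error  { log.Println("destroy:", data); return nil }
func resizeCapsul(data string) error   { log.Println("resize:", data); return nil }
func stopCapsul(data string) error     { log.Println("stop:", data); return nil }
func startCapsul(data string) error    { log.Println("start:", data); return nil }
func updateCapsulIP(data string) error { log.Println("update ip:", data); return nil }

// runJob fans a claimed job out to a handler based on the operation column
func runJob(operation, data string) error {
    switch operation {
    case "destroy-capsul":
        return destroyCapsul(data)
    case "resize-capsul":
        return resizeCapsul(data)
    case "stop-capsul":
        return stopCapsul(data)
    case "start-capsul":
        return startCapsul(data)
    case "update-capsul-ip":
        return updateCapsulIP(data)
    default:
        return fmt.Errorf("unknown operation %q", operation)
    }
}

func main() {
    if err := runJob("stop-capsul", `{"name":"capsul-siod9wdsdd","force":false}`); err != nil {
        log.Fatal(err)
    }
}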

questions include:

how would we make sure the right requests are routed to the right capsul hosts? (baikal cannot destroy capsuls that aren't on it)

#database commands

-- required submission fields: placement, operation, data

CREATE DATABASE "capsul-flask";
CREATE TABLE job (
  id serial primary key,
  submitted_at timestamp default current_timestamp,
  processed_at timestamp,
  processed_by varchar(255),
  placement varchar(255) not null,
  operation varchar(255) not null,
  result varchar(255),
  data json not null
);
INSERT INTO job (placement, operation, data)
VALUES('any', 'say-hello', '{"string":"hello"}');
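
to tie the schema together, here is a hedged sketch of a worker picking up that say-hello job and recording an outcome. the worker name, and the idea that the greeting belongs in the result column, are assumptions:

package main

import (
    "database/sql"
    "encoding/json"
    "fmt"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    db, err := sql.Open("postgres", "dbname=capsul-flask sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }

    tx, err := db.Begin()
    if err != nil {
        log.Fatal(err)
    }
    defer tx.Rollback()

    // claim the oldest unprocessed say-hello job this worker is allowed to handle
    var id int
    var data []byte
    err = tx.QueryRow(`
        SELECT id, data FROM job
        WHERE processed_at IS NULL
          AND (placement = $1 OR placement = 'any')
          AND operation = 'say-hello'
        ORDER BY submitted_at
        LIMIT 1 FOR UPDATE`, "baikal").Scan(&id, &data)
    if err != nil {
        log.Fatal(err) // sql.ErrNoRows just means nothing to do yet
    }

    var payload struct {
        String string `json:"string"`
    }
    if err := json.Unmarshal(data, &payload); err != nil {
        log.Fatal(err)
    }
    fmt.Println(payload.String) // "hello"

    // record who ran the job, when, and what came of it; storing the greeting
    // in result is just a guess at what result is for
    _, err = tx.Exec(`
        UPDATE job
        SET processed_at = current_timestamp, processed_by = $1, result = $2
        WHERE id = $3`, "baikal", payload.String, id)
    if err != nil {
        log.Fatal(err)
    }
    if err := tx.Commit(); err != nil {
        log.Fatal(err)
    }
}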