This page summarises the completed tasks and the current round of tasks within the NLnet grant for the A12-Directory improvements.
The overall purpose is to get the A12 protocol, reference tooling and implementation to the point where the visions of a. "Many Devices, One Desktop" and b. "Your Desktop, reaching out" become a practical reality. This means that a set of devices, ranging from full-blown desktops, to home servers, to single-board computers, should be able to work in unison - sharing load and individual capabilities. It should also open up for extended collaboration, letting you compartmentalise work and invite others to participate in a secure and accessible manner.
Milestone 1:
Status: Completed
Linking local and remote development
Before being able to let proof-of-concept applications drive feature selection and design details, we needed some basic building blocks. The first was to make sure that the script collection running on a user-facing device (appl) could communicate with others within your device network, particularly the home server used for coordination (directory).
This was implemented by extending the namespace facility of the 'open_nonblock' function call. Namespaces were previously defined through
the arcan_db tool:
arcan_db add_appl_kv arcan ns_myns Home:rw:/home/someuser
The scripts could then call list_namespaces and see 'Home' as the user-presentable label, and 'myns' as the reference
for the namespace. This could then be used with functions like open_nonblock("myns:/something"). The extension made here
is a reserved 'a12' namespace, with a '.' prefix for special files. To list files exposed by a controller (a directory
server side set of scripts with a name matching the local appl), one can open_nonblock("a12:/.index"), read out
a listing, and then use further calls to open_nonblock("a12:/somefileorhash") to stream a file from the network of
directory servers.
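As a rough sketch of the appl side of this (hypothetical code; the read/data_handler pattern and the line format of .index are simplified assumptions here):

local index = open_nonblock("a12:/.index")
index:data_handler(
    function()
        -- assumed: each readable line names an entry that can be opened and streamed in turn
        local line = index:read()
        if line and #line > 0 then
            local item = open_nonblock("a12:/" .. line)
        end
    end
)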
There are also user private ones, available by using the reference handle from net_open("@stdin"):
a = net_open("@stdin")
nbio = open_nonblock(a, ".index")
This would retrieve the index for the private store tied to the authentication key used when running from the directory, e.g.
arcan-net --put-file mydirectory@ some.file
arcan-net mydirectory@ myappl
(inside the script of myappl, using net_open as above): open_nonblock(a, "some.file")
Modify protocol and reference implementation to support linking directory servers together
This was implemented by an extension to the 'config.lua' script one would use to configure the directory server, e.g.
arcan-net -c config.lua. We added an entry-point for when the server was finished configuring, called ready, and a
function link_directory.
function ready()
    link_directory("dd", function(source, status) end)
end
Where 'dd' was previously defined in the keystore (e.g. arcan-net dd arcan.divergent-desktop.org). This requires the
remote end to permit someone making a link to it: config.permissions.link = 'sometag'. The handler function
provides feedback on the link status (if a connection couldn't be made, or was dropped) so that the configuration script
can react accordingly.
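For illustration, a sketch of a config.lua that re-establishes a dropped link (the status field used here is an assumption, not the exact table layout):

function ready()
    local function relink(source, status)
        -- hypothetical handling: if the link could not be made or was lost, try again
        if status.kind == "terminated" then
            link_directory("dd", relink)
        end
    end
    link_directory("dd", relink)
end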
We ended up with two types of links, a referential one and a unified one. The referential form requires less permission as it simply lets one directory server route traffic to another:
arcan-net myserv@ dd/someappl
This would connect through 'myserv', access the 'dd' link and run 'someappl' from there. The unified form doesn't expose that there is a connection; all file access and appl messaging is handled transparently.
Configuration and tool modification to permit or revoke access for specific directory server links
After several failed prototypes, we settled on exposing admin functions via another entrypoint in the config script,
admin_command, so that server administration has one access interface for all current and future administration
features.
config.lua:
function admin_command(client, command)
    -- commands arrive as text lines, e.g. "link dd" to link the 'dd' keystore tag
    if string.sub(command, 1, 5) == "link " then
        link_directory(string.sub(command, 6), function(source, status) end)
        client:write("ok\n")
    end
end
If the authentication key has config.permissions.admin, the arcan-net tool can be used to route commands there:
arcan-net --admin-ctrl myserv@
Inputs on stdio are then routed to the admin_command handler and any written results are sent back to stdout.
This can then be used to modify permissions, or to assign or remove tags for an active client (available via the register
and register_unknown entrypoints).
Milestone 2
Status: Completed
Extend protocol and reference implementation with support for signed file and state store
The 'REKEY' facility in the protocol, used for stepping the ratchet that provides forward secrecy, gained a mode where the client can assign a signing key identity to complement the authentication one. This is done by using the signing key to sign a challenge that the server provides after authentication.
The tooling side of this looks simple:
arcan-net --sign-tag sometag --push-appl myappl mydir@
The sign-tag argument will first complete the REKEY part to prove ownership of the key, then subsequent transfer
operations will apply a signature to the header. For the --push-appl form above, this will extend the manifest (version,
permissions, ...) for the appl with the public part of the key, a signature of the header and a signature of the
data block.
When running an appl, arcan-net mydir@ myappl will then verify that the signatures match the key, and refuse to
run if they don't.
Add debugging controls for synchronous stepping local application execution with server side processing
After trying, and failing, to implement the beefy 'debug adapter protocol' spec (which we have a UI and client
implementation for in Cat9), we decided to modify the monitor interface (src/arcan_monitor.c) to the main engine
to implement a simpler protocol, along with an implementation of that protocol in Cat9:
builtin dev
debug launch arcan someappl
(or attach via an established socket: debug attach arcan /path/to/socket). The same interface was then added to the server-side controller (protocol-wise it is a datastream via the developer-permission '.debug' resource, along with some added VM / process control to communicate across the sandbox).
The protocol covers all the expected 'stepnext', 'stepinstruction', 'stepcall', 'stepend', 'locals', 'breakpoint', 'eval', 'dumpkeys', 'backtrace', 'source' and so on (src/a12/net/dir_lua_support).
The arcan-net tool can then be used as arcan-net --debug-appl mydir@ myappl, with stdio rerouted across this channel.
Server side application support for launching dynamic sources
Both the server config script and the appl controller scripts got a launch_target call for running a
database-defined target:
arcan_db add_target mybin BIN /usr/bin/Xarcan -redirect -exec chromium
Something like launch_target("somename", "mybin") would generate ephemeral keys and mark them as temporarily
trusted for acting as a data source, then launch the binary over arcan-net as a loopback connection.
Depending on whether "somename" is a user-presentable name or a reference to an existing connection, it'll either register a publicly available source, or a scoped one only visible and accessible to a specific user. In the latter case the client will also be notified that there is a dynamic source available for immediate sinking.
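As an example (a sketch in config.lua reusing the 'mybin' target from above; 'outlaunch' is just a hypothetical user-presentable name):

function ready()
    -- passing a user-presentable name rather than a connection reference
    -- registers a publicly available dynamic source backed by 'mybin'
    launch_target("outlaunch", "mybin")
end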
We also added a server config option to 'host' Arcan appls themselves. This requires the server to have the arcan_lwa
executable (which is a simplified form of the engine that can't control displays). If the server configuration permits it,
a client can:
arcan-net dd@ "|myappl"
Instead of downloading and running myappl locally, the server will spin up an instance of arcan_lwa running
myappl, with access to the user's private state store. This connects as a new restricted source directed toward
the connection that made the request, and arcan-net will source it. This lets the simplified 'smash' viewer stream
any arcan appl without having the rest of the stack available.
Server side support for triaging and collecting client side crash dumps and snapshots
When a client running an appl runs into a failed exit (script crash), arcan-net collects information,
packages it and sends it to a pre-reserved server-side private store slot.
This has been combined with a flush_report function call available to the admin script, as well as a
'.report' file available to a developer or controller script, generated dynamically by combining
all user-submitted reports with log reports from the server VM.
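As a sketch of how this could tie into the admin_command interface shown under Milestone 1 (the 'flushreports' command name is hypothetical and whether flush_report takes arguments is not shown here):

function admin_command(client, command)
    if command == "flushreports" then
        -- package and hand over the collected crash reports
        flush_report()
        client:write("ok\n")
    end
end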
Milestone 3
Status: Completed
Support source-sink crash/disconnect resumption
This took a substantial refactor of how arcan-net hosts sources, e.g.
arcan-net -l 6680 -- /some/arcan/shmif/client
Hosting now splits out into a separate arcan-net-session binary. This tracks the connection status for a
hosted source, and if the source is alive when a connection is terminated, it is kept in a
dormant state, paired to the authentication key used by the sink.
When a new connection arrives, the authentication key is checked against the set of pending sources, and if there is a match, the source is told to reset to a 'wm tracking lost' state (renegotiate colours, subwindows and so on).
Allowing multiple sources to access a single sink (broadcast)
This extends the 'crash/disconnect resumption' feature by adding a --cast argument:
arcan-net --cast -l 6680 -- /some/arcan/shmif/client
The first client that connects gets the /some/arcan/shmif/client source to sink and 'drive' the connection. Internally this spins up a framecache that tracks video buffer encoding state. When new clients connect, they are routed through this framecache (which also instructs the primary connection to try and quickly get to new keyframes to reduce initial delay).
API for server-side application key/value store access
This has been implemented for the config script scope and for the controller script scope. The latter was more complicated as all calls have to go across the sandbox barrier since the controller doesn't have file-system access.
The functions themselves look and behave like the local Arcan appl match_keys, store_key and get_key. The
big change for the controller script side is that the lookups are asynchronous. This is necessary due to
the sandbox and because the keys themselves may be distributed across a network of linked directories.
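A sketch of what this could look like from a controller script (the callback-style completion used here is an assumption; only the function names are taken from the local Arcan API):

-- hypothetical controller-side sketch: read a key asynchronously, then update it
get_key("counter",
    function(val)
        local count = tonumber(val) or 0
        store_key("counter", tostring(count + 1))
    end
)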
API for application driven resource indexing
Normally the open_nonblock(ref, ".index") call routes through a glob as per the first milestone. This
request can be intercepted by the controller script and transparently remapped to a server-defined name.
The plan is to combine this with pluggable services for routing/caching through other means, e.g.
IPFS, torrent or regular https.
If the controller script implements the _index hook:
function myappl_index(client, nbio)
end
The actual stream returned is now entirely controlled by nbio:write calls. Other get and put requests
are handled similarly so that the scripts can run them through higher-level description generation, like an
LLM creating a textual representation of an image or OCR retrieval of text.
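Filling in the hook (a sketch; the entry names are just hypothetical examples):

function myappl_index(client, nbio)
    -- stream a controller-defined listing instead of the default glob result
    nbio:write("intro.png\n")
    nbio:write("soundtrack.ogg\n")
end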
Search and retrieve resources based on description, hash and signature (completed)
This was solved by having a METADATA slot for an upload, and a .index upload slot which acts as a template for how future .index downloads are generated. The .index format is simply a set of keys that would cause inclusion, e.g.:
description:match=Family Vacation type=jpeg keyword=sun,beach
Much of the load in generating the search index would be application-specific, but the arcan-net CLI tool allows a --get-file .priv .index - --put-file .priv .index - that would first submit the filter index from STDIN and then stream the filtered index to STDOUT.
API for publishing / unpublishing / mirroring a resource across the directory network
This was partially solved on the server side: the controller script can open_nonblock a resource and, if it doesn't fit locally, go through the link worker that maintains the uplink to another directory server node; that worker gets to manage its cache before forwarding the request.
The more useful part was allowing config.lua to define external resolvers and attach them to various runners:
local resolver = launch_resolver("/path/to/some_resolver", "some", "arguments")
set_resolver(resolver) -- would become the default
set_resolver("myctrl", resolver) -- overrides the default for 'myctrl' specifically
File requests from the runner of a specific controller are then routed to it. The 'some_resolver' external binary is a regular SHMIF client that handles BCHUNK_IN/BCHUNK_OUT requests.
There is a test resolver in tests/core/a12resolve that merely checks against a local folder. For more substantial use, an additional one will be created that wraps curl/libfetch/IPFS as part of the PoC chat application.
Milestone 4 - Proof of Concept Applications
Status: ongoing
This covers PoC applications with both a controller-side application and a client for some example categories: game, presentation and chat,
together with a larger dissemination piece on how all of this fits together. (Completed, not yet published)
Milestone 5 - Increase user security / safety / agency
Verify sandboxing and mitigation across ports
(ongoing) mainly missing Capsicum on FreeBSD and Landlock on Linux. What has been done is splitting all processing into incrementally more fine-grained processes, both for maintaining links, running controllers, and so on.
Add support for FIPS 203 ML-KEM-768 Rekey (completed)
This is now enabled opt-in on both the server and client side. When --rekey-pqc is used, the REKEY commands for chunked transfer of an ephemeral public key and the expected ciphertext reply are sent when a connection is established.
Milestone 6 -
Update protocol specification to reflect protocol changes for linking, multiple sources and integrity (completed)
completed - this was performed incrementally as the other Milestones were processed.
Extend reference window manager (Durden) networking tool to support issue reporting / tracking
(ongoing) this will take the Durden crash collection widget and add a button to submit the crash as a file upload, getting a ticket identifier back, logging the identifier for deduplication and checking whether it has been solved or not.
The rest will come as a Cat9 builtin.
Track multiple deployed versions of an application and support rollback for a broken deployment
(ongoing) this will be handled by first migrating storage of applications to a packaged form (done), then storing the packages under incremental names with a symlink to the currently distributed version, and adding an admin command / config.lua API for switching versions.
Update Client API Documentation to follow a uniform format for generating type-safe bindings
(ongoing) this will be handled by a Cat9 documentation viewer with a parser for the existing doc/*.lua files that alerts on those that fail to parse, spawning an editor allowing a quick update.
Add Server Side API Documentation
(ongoing) comes in two forms: doc/a12_directory.md and doc/ctrl/*.lua, which will follow the same strategy as the client documentation.
CLI tool for building turnkey bootable / VM image for turning a device into a source or sink
(ongoing) this will come as a Cat9 system builtin for specifying the template, reference appl, keystore and default directory - and then have a pluggable distribution-based backend, with one written using void-mklive.
Summary blog post, demonstration video and user introduction guide
incomplete - will be added last.
Assimilation of Accessibility Review Comments
incomplete - pending review and blocked on the image builder.
Assimilation of Security Review Comments
incomplete - pending review and blocked on the verification of mitigations.