A Day in a Pile of Work

My personal Web development blog

Toward a first release in Valum

I’m working on the beta release that should bring minor improvements and more definitive APIs.

  • JSON example and documentation with json-glib
  • final renaming to ensure a quality and elegant API
  • CGI and SCGI implementations

The next step is a stable 0.2.0 release which should happen in the coming weeks.

  • RPM packaging and distribution (see Valum on COPR)
  • Docker container example using the RPM package

Invocation in the Router context

This feature was missing from the last release and solves the issue of calling next when performing asynchronous operations.

When an async function is called, the callback that will process its result does not execute in the routing context and, consequently, does not benefit from any form of status handling.

app.get ("", (req, res, next) => {
    res.body.write_async ("Hello world!".data, () => {
        next (); // if next throws anything, it's lost
    });
});

What invoke brings is the possibility to invoke a NextCallback in the context of any Router, typically the current one.

app.get ("", (req, res, next) => {
    res.body.write_async ("Hello world!".data, () => {
        app.invoke (req, res, next);
    });
});

It respects the HandlerCallback delegate and can thus be used as a handling middleware with the interesting property of providing an execution context for any pair of Request and Response.

The following example will redirect the client as if the redirection was thrown from the API router, which might possibly handle redirection in a particular manner.

app.get ("api", (req, res) => {
    // redirect old api calls
    api.invoke (req, res, () => { throw new Redirection ("http://api.example.com"); });
});

As we can see, it offers the possibility of executing any NextCallback in any routing context we might need, reusing behaviours instead of reimplementing them.

RPM packaging

I wrote a specfile for RPM packaging so that we can distribute the framework on RPM-based distributions like Fedora and openSUSE. The idea is to eventually offer the possibility to install Valum in a Docker container to facilitate the deployment of web services and applications.
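For reference, building and installing from the specfile might look like the following sketch (the specfile name and COPR repository path are assumptions, not the published ones):

```shell
# build the RPM from the specfile
# (assumes sources have been placed in ~/rpmbuild/SOURCES)
rpmbuild -ba valum.spec

# or, once published on COPR, enable the repository and install directly
# (the repository name here is hypothetical)
sudo dnf copr enable arteymix/valum
sudo dnf install valum
```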

I have literally no knowledge of Debian packaging, so if you would like to help me with that, I would appreciate it.

Posted on .

Ninth week update (from 15/06/15 to 26/06/15) in Valum

The past weeks were highly productive and I managed to release 0.2.0-alpha with a very nice set of features.

There are 38 commits separating v0.1.4-alpha and v0.2.0-alpha tags and they contain a lot of work.

The 38 commit descriptions:

1f9f7da Version bump to 0.2.0-alpha.
0ecbf22 Fixes 9 warnings for the VSGI implementations.
64df1f5 Test for the ChunkedConverter.
cc9f320 Fixes libsoup-2.4 (<2.50) output stream operations.
de27c7b Renames Server.application for Server.handle.
f302255 Improvments for the documentation.
c6587a0 Removes reference to connection in Response.
5232c48 Write the head when a status is handled to avoid an empty message.
61961ee Fixes the code formatting for the handling process.
05aca66 Improves the documentation about asynchronous processing.
41ce7a7 Removes timeout as it is not usable with the processing model.
42bbfc7 Updates waf to 1.8.11 and uses the valac threading fix.
7588cf2 Exposes head_written as a property in Response.
4754131 Merge branch '0.2/redesign-async-model'
409e920 Updates the documentation with 0.2/* changes.
acf6200 Explicitly closes with async operations.
22d8c54 Provides SimpleIOStream for gio-2.0 (<2.44).
eeaeee1 Considers 0-sized chunk as ending chunk in ChunkedConverter.
86b0752 Provides write_head and write_head_async to write status line and headers.
11e3a34 Uses reference counting to free a request resources.
19dc0e2 Replaces VSGI.Application with a delegate.
53e6f24 Ignores the HTTP query in 'REQUEST_URI' environment variable.
5596323 Improves testability of FastCGI implementation and provide basic tests.
fb6b0c4 FastCGI streams uses polling to perform read and write operations.
3ec5a22 Tests for the libsoup-2.4 implementation of VSGI.
869fe00 Fixes 2 compilation warnings.
dfc98e4 Adds a transparent gzip compression example using ZLibCompressor.
79f9b18 Write request http_version in response status line.
937ceb2 Avoid relying on states to write status line and headers.
e73ebd6 Properly close the request and response body in end default handler.
fc57bf0 Documentation for converters.
384fd97 Reimplements chunked streams with a Converter.
c7c718c Renames raw_body of base_stream in Response.
0479739 steal_connection is available with libsoup-2.4 (>=2.50)
2a25d8d Set Transfer-Encoding for chunked in Router setup with HTTP/1.1.
c8ebad7 Writes status line and headers in end if it's not already done.
1678cb1 Uses ChunkedOutputStream by default in Response base implementation.
bb10337 Uses real stream by VSGI implementations.

Some of these commits were introduced prior to the seventh week update, the exact date can be checked from GitHub.

In summary,

  • asynchronous processing with RAII
  • steal the connection for libsoup-2.4 implementation
  • write_head and write_head_async
  • head_written property to check if the status line and headers have been written in the response

I am working toward a stable release with this series; the following releases will bring features around a solid core in a backward-compatible manner.

Key points for what’s coming next:

  • middlewares
  • components
  • documentation
  • distribution (RPM, Debian, …)
  • deployment (Docker, Heroku, …)

Finishing the APIs

All the features are there, but I really want to take some time to clean the APIs, especially ensuring that naming is correct to make a nice stable release.

I have also sought feedback on the Vala mailing list, so that I can get some reviews of the current code.

Asserting backward compatibility

Eventually, the 0.2.0 will be released and marked stable. At this point, we will have a considerable testsuite that can be used against following releases.

According to semantic versioning (the release model we follow), any hotfix or minor release has to guarantee backward compatibility, which can easily be verified by running the test suite from the preceding release against the new one.

Once we have an initial stable release, it would be great to set up a hook to run the older suites in the CI.

What’s next?

There is a lot of work to do in order to make Valum complete and I might not be done by the summer. However, I can get it production-grade and usable for sure.

CGI and SCGI implementations are already working and I will integrate them in the 0.3.0 along with some middlewares.

Middlewares are these little pieces of processing that make routing fun and they were thoroughly described in past posts. The following features will make a really good start:

  • content negotiation
  • internationalization (extract the domain from a request)
  • authentication (basic, digest, OAuth)
  • cache (ETag, Last-Modified, …)
  • static resource serving from File or Resource
  • jsonify GObject

They will be gradually implemented in minor releases, but first they must be thought out, as there will be no going back.

I plan to work a lot on optimizing the current code a step further by running it through Valgrind to identify the CPU, memory and I/O bottlenecks. The improvements can be released in hotfixes.

Valum will be distributed on two popular Linux distributions at first: Ubuntu and Fedora. I personally use Fedora and it would be a really great platform for that purpose as it ships very innovative open source software.

Once distributed, it will be possible to install the software package in a container like Docker or a hosting service like Heroku and make development a pleasing process and large-scale deployment possible.

Posted on .

Release for tomorrow! in Valum

Tomorrow, I will be releasing the first version of the 0.2 series of the Valum web micro-framework.

There are still a couple of tests to run to ensure that everything is working perfectly, and then I will distribute the 0.2.0-alpha.

This series will target an eventual stable release, after which rules will be enforced to ensure API compatibility. It will definitely mark the end of the last days of prototyping.

Good night y’all!

Posted on .

0.1.4-alpha released! in Valum

I am happy to announce the release of the 0.1.4-alpha version of the Valum web micro-framework, which brings minor improvements and complete CLI options for VSGI.Soup.

Cookies

The cookies were moved from VSGI to Valum since they are only an abstraction over request and response headers. VSGI aims to be a minimal protocol and should provide just enough abstraction for the HTTP stack.

CLI options for VSGI.Soup

This is quite a change and brings a wide range of new possibilities with the libsoup-2.4 implementation. It pretty much exposes Soup.Server capabilities through CLI arguments.

In short, it is now possible to:

  • listen to IPv4 or IPv6 only
  • listen from a file descriptor
  • listen from all network interfaces (instead of locally) with --all
  • enable HTTPS and specify a certificate and a key
  • set the Server header with --server-header
  • prevent the Request-URI from being URL-decoded with --raw-paths

The implementation used to listen from all interfaces, but this is not a desired behaviour. The --all flag will let the server listen on all interfaces.

The behaviour for --timeout has been fixed and now relies on the presence of the flag to enable a timeout instead of a non-zero value. This brings the possibility of setting a timeout value of 0.
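Assuming an executable built against VSGI.Soup (the binary name here is hypothetical), the options described above combine as you would expect:

```shell
# listen on all interfaces, set a Server header and a 30 second timeout
./app --all --server-header "Valum/0.1" --timeout 30

# serve raw, non-URL-decoded paths
./app --raw-paths
```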

There is some potential work for supporting arbitrary sockets, but it would require a new dependency, gio-unix, which would only support UNIX-like systems.

Posted on .

Seventh week update (from 1/06/15 to 12/06/15) in Valum

Since the 0.1.0-alpha release, I have been releasing a bugfix release every week and the APIs are continuously stabilizing.

I have released 0.1.2-alpha with a couple of bugfixes, status code handlers and support for null as a rule.

I have also released a 0.1.3-alpha that brought bugfixes and the possibility to pass a state in next.

Along with the work on the current releases, I have been working on finishing the asynchronous processing model and developed prototypes for both CGI and SCGI protocols.

Passing a state!

It is now possible to transmit a state when invoking the next continuation so that the next handler can take it into consideration for producing its response.

The HandlerCallback and NextCallback were modified in a backward-compatible way so that they would propagate the state represented by a Value?.

The main advantage of using Value? is that it can transparently hold any type from the GObject type system and primitives.

This feature becomes handy for various use cases:

  • pass a filtered stream over the response body for the next handler
  • compute and transmit in separate handlers
    • the computation handler is defined once and passes the result in next
    • different handlers for transmitting different formats (JSON, XML, HTML, plain text, …)
  • fetch data common to a set of routes
  • build a component like a session or a model from the request and pass it to the next route

This example shows how a state can be passed to conveniently split the processing in multiple middlewares and obtain a more modular application.

app.scope ("user/<int:id>", (user) => {
    // fetch the user in a generic manner
    app.all (null, (req, res, next) => {
        var user = new User.from_id (req.params["id"]);

        if (!user.loaded ())
            throw new ClientError.NOT_FOUND ("User with id %s does not exist.".printf (req.params["id"]));

        next (user);
    });

    // GET /user/{id}/
    app.get ("", (req, res, next, state) => {
        User user = state;
        next (user.username);
    });

    // render an arbitrary JSON object
    app.all (null, (req, res, next, state) => {
        var generator = new Json.Generator ();

        // generate compacted JSON
        generator.pretty = false;

        generator.set_root (state);

        generator.to_stream (res.body);
    });
});

It can also be used to build a component like a Session from the request cookies and pass it.

app.all (null, (req, res, next) => {
    for (var cookie in  req.cookies)
        if (cookie.name == "session")
            { next (new Session.from_id (cookie.value)); return; }
    var session = new Session ();
    // create a session cookie
    res.cookies.append (new Cookie ("session", session.id));
    next (session);
});

This feature will integrate very nicely with the content negotiation middlewares that will be incorporated in the near future. It will help solve the typical case where one handler produces data and other handlers worry about transmitting it in a desired format.

app.get ("some_data", (req, res, next) => {
    next ("I am a state!");
});

app.matcher (accept ("application/json"), (req, res, next, state) => {
    // produce a json response
    res.write ("{\"message\": \"%s\"}".printf (state.get_string ()));
});

app.matcher (accept ("application/xml"), (req, res, next, state) => {
    // produce a xml response
    res.write ("<message>%s</message>".printf (state.get_string ()));
});

This new feature made it into the 0.1.3-alpha release.

Toward a minor release!

I have been delayed by a bug when I introduced the end continuation to perform request teardown, and I hope I can solve it by Friday. To avoid blocking the development, I have been introducing changes in the master branch and rebasing the 0.2/* branches upon them.

The 0.2.0-alpha will introduce very important changes that will define the asynchronous processing model we want for Valum:

  • write the response status line and headers asynchronously
  • end continuation invoked in synchronous or asynchronous contexts
  • assign the bodies to filter or redirect them
  • lower-level libsoup-2.4 implementation that takes advantage of non-blocking stream operations
  • polling for FastCGI to perform non-blocking operations

Most of these changes have been implemented and will require tests to ensure their correctness.

Some design changes have pushed the development a bit forward as I think that the request teardown can be better implemented with reference counting.

Two major changes improved the processing:

  • the state of a request is now wrapped in a connection, which is typically implemented by an IOStream
  • the request and response hold the connection, so whenever both are out of scope, the connection is freed and the resources are released

Not all implementations provide an IOStream, but one can easily be implemented and used to free any resources in its destructor.
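A minimal sketch of what such a connection might look like (the class name and fields are assumptions, not the actual VSGI code):

```vala
// Sketch: a connection exposing the streams of an underlying
// handle and releasing its resources when the last reference
// (held by the Request and the Response) is dropped.
public class Connection : IOStream {

    private InputStream _input_stream;
    private OutputStream _output_stream;

    public override InputStream input_stream {
        get { return _input_stream; }
    }

    public override OutputStream output_stream {
        get { return _output_stream; }
    }

    ~Connection () {
        // both Request and Response are out of scope:
        // free the underlying request resources here
    }
}
```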

I hope this can make it into the trunk by Friday, just in time for the lab meeting.

New implementation prototypes

I have been working on CGI and SCGI prototypes as these are two very important protocols for the future of VSGI.

CGI implements the basic CGI specification, which is reused in protocols like FastCGI and SCGI. The great thing is that they can use inheritance to reuse behaviours from CGI, like the environment extraction, to avoid code duplication.

Posted on .

Rebasing Valum in Valum

In order to keep a clean history of changes, we use a rebasing model for the development of Valum.

The development often branches when features are too prototypical to be part of the trunk, so we use branches to maintain these different states.

Some branches mark a distinct division in the development, like those maintaining a specific minor release.

  • master
  • 0.1/*
  • 0.2/*

They are public and meant to be merged when the time seems appropriate.

At some point in the development, we will want to merge the 0.2/* work into the master branch, so a merge is the coherent approach.

When to rebase?

However, there are those branches that focus on a particular feature and do not constitute a release by themselves. Typically, a single developer will focus on bringing the changes, propose them in a pull request and adapt them based on reviews.

It is absolutely correct to push --force on these submissions, as it is assumed that the author has authority on the matter and it would be silly for someone else to build anything from a work in progress.

If changes have to be brought, amending and rewriting history is recommended.

If changes are brought on the base branch, rebasing the pull request is also recommended to keep things clean.

The moment everyone seems satisfied with the changes, they get merged. GitHub creates a merge commit even when fast-forwarding, but that’s okay considering that we are literally merging and it has the right semantics.

Let’s take a typical example where we have two branches:

  • master, the base branch
  • 0.1/route-with-callback, a branch containing a feature

Initially, we got the following commit sequence:

master
master -> route-with-callback

If a hotfix is brought into master, 0.1/route-with-callback will diverge from master by one commit:

master -> hotfix
master -> route-with-callback

Rebasing is appropriate and the history will be turned into:

master -> hotfix -> route-with-callback

When the feature’s ready, the master branch can be fast-forwarded with the feature. We get that clean, linear and comprehensible history.

How do I do that?

Rebasing is still a cloudy git command and can lead to serious issues for a newcomer to the tool.

The general rule is to strictly rebase non-public commits. If you rebase, chances are that the sequence of commits will no longer match others’, so making sure that your history is not of public authority is a good start.

git rebase -i is what I use the most. It’s the interactive mode of the rebase command and can be used to rewrite the history.

When invoked, you get the rebasing sequence and the ability to process each commit individually:

  • squash will meld two commits
  • fixup is like squash, but will discard the squashed commit message
  • reword will prompt you for editing the commit message
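In practice, rewriting the last few commits looks like this (the hashes and messages in the todo list are, of course, hypothetical):

```shell
# rewrite the last three commits interactively
git rebase -i HEAD~3

# the editor then presents a todo list such as:
#
#   pick 1a2b3c4 Introduce the next continuation
#   fixup 5d6e7f8 Fix a typo in the next continuation
#   reword 9a0b1c2 Document the next continuation
#
# changing 'pick' to 'squash', 'fixup' or 'reword' applies the
# corresponding operation when the rebase executes
```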

I often stack work in progress in my local history because I find it easier to manage than stashes. When I introduce new changes on my prototypes, I fixup the appropriate commit.

However, you can keep a cleaner work environment and branch & rebase around; that’s just as appropriate. You should do whatever you feel works best to keep things manageable.

Hope that’s clear enough!

Posted on and tagged with git.

0.1.1-alpha released! in Valum

Valum 0.1.1-alpha has been released, the changeset is described in the fifth week update I published yesterday.

You can read the release notes on GitHub to get a better idea of the changeset.

I am really proud of announcing that release as it brings two really nice features:

  • next continuation in the routing process
  • all and methods

These two features completely replace the need for a setup signal. It’s only a matter of time before teardown disappears with the end continuation and status handling that the 0.2.0-alpha release will bring.

I expect the framework to start stabilizing on the 0.2.* branch once the asynchronous processing model is well defined and VSGI more solid.

Okay, I’m back to work now ;)

Posted on .

Fifth week update (from 18/05/15 to 29/05/15) in Valum

These past two weeks, I have been working on a new 0.2.0-alpha release and fixed a couple of bugs in the last release.

To keep things simple, this report will cover what has been introduced in the 0.1.1-alpha and 0.2.0-alpha releases separately.

The 0.2.0-alpha should be released by the fifth of June (05/06/15) and will introduce the long-awaited asynchronous processing model.

As of right now

I have fixed a couple of bugs, backported minor changes from the 0.2.0-alpha and introduced minor features for the 0.1.1-alpha release.

Here’s the changelog:

e66277c Route must own a reference to the handler.
ec89bef Merge pull request #85 from valum-framework/0.1/methods
b867548 Documents all and methods for the Router.
42ecd2f Binding a callback to multiple HTTP methods.
e81d4d3 Documents how errors are handled with the next continuation.
5dd296e Example using the next continuation.
ec16ea7 Throws a ClientError.NOT_FOUND in process_routing.
fb04688 Introduces Next in Router.
79d6ef5 Support HTTPS URI scheme for FastCGI.
29ce894 Uses a synchronous request processing model for now.
e105d00 Configure option to enable threading.
5f8bf4f Request provide HTTPVersion information.
0b78178 Exposes more GObject properties for VSGI.
33e0864 Renames View splice function for stream.
3c0599c Fixes a potential async bug with Soup implementation.
91bba60 Server documentation.

It is now possible to access the HTTP version in the Request, providing useful information about available features.

I fixed the --threading option and submitted a patch to waf development that got merged in their trunk.

The FastCGI implementation honours the HTTPS environment variable when determining the URI scheme. This way, it is possible to determine whether the request was secured with SSL.

I enforced a synchronous processing model for the 0.1.* branch since it’s not ready yet.

It is now possible to keep routing if we decide that a handler does not complete the user request processing. The next continuation is crafted to continue routing from any point in the route queue. It will also propagate Redirection, ClientError and ServerError up the stack.

app.get ("", (req, res, next) => {
    next ();
});

app.get ("", (req, res) => {
    res.write ("Hello world!".data);
});

It is now possible to connect a handler to multiple HTTP methods at once using all and methods functions in the router.

The Route is safer and keeps a strong reference to the handler callback. This avoids a potentially undesired deallocation.

Changes for the next release

The next release 0.2.0-alpha will focus on the asynchronous processing model and VSGI specification.

In short,

  1. the server receives a user request
  2. the request is transmitted to the application with a continuation that releases the request resources
  3. the application handles the pair of request and response:
    • it may invoke asynchronous processing
    • it returns control to the server as fast as possible and avoids any synchronous blocking on I/O
    • it must invoke the end continuation when all processing has completed so that the server can release the resources
  4. the server is ready to receive a new request

The handler itself is purely synchronous, which is why it is not recommended to perform blocking operations in it.

app.get ("", (req, res, end) => {
    res.write ("Hello world!".data);
    res.close ();
    end ();
});

This code should be rewritten with write_async and close_async to return the control to the server as soon as possible.

app.get ("", (req, res, end) => {
    res.write_async ("Hello world".data, Priority.DEFAULT, null,
                 () => {
        res.close_async (Priority.DEFAULT, null, () => {
            end ();
        });
    });
});

Processing asynchronously has a cost, because it delegates the work to an event loop that awaits events from I/O.

The synchronous version will execute faster, but it will not scale well with multiple requests and significant blocking. The asynchronous model will easily outperform it due to a pipeline effect.

VSGI improvements

Request and Response now have a base_stream and expose a body that may filter what’s being written in the base_stream. The libsoup-2.4 implementation uses that capability to perform chunked transfer encoding.

There is no more inheritance from InputStream or OutputStream, but this can be reimplemented using FilterInputStream and FilterOutputStream.

I have implemented a ChunkedConverter to convert data into chunks according to RFC 2616 section 3.6.1.

It can also be used to do transparent gzip compression using the ZlibCompressor.
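As a sketch, transparent gzip could be obtained by composing the response body with a ConverterOutputStream from GIO (the res.headers and res.body members are assumptions based on the examples above):

```vala
// advertise the compression to the client
// (assumption: headers are exposed as Soup.MessageHeaders)
res.headers.append ("Content-Encoding", "gzip");

// everything written to 'gzipped' is compressed transparently
// before reaching the response body
var gzipped = new ConverterOutputStream (res.body,
                                         new ZlibCompressor (ZlibCompressorFormat.GZIP));

gzipped.write ("Hello world!".data);
```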

Soup reimplementation

The initial implementation was pretty much a prototype wrapping a MessageBody with an OutputStream interface. It is however possible to steal the connection and obtain an IOStream that can be exposed.

MessageBody would usually worry about transfer encoding, but since we are working with the raw streams, some work will have to be done in order to provide that encoding capability.

In HTTP, the transfer encoding determines how the message body will be transmitted to its recipient. It provides information to the client about the amount of data that will be transferred.

Two possible transfer encodings exist:

  • with the Content-Length header, the recipient expects to receive that number of bytes
  • with chunked in the Transfer-Encoding header, the recipient expects to receive a chunk size followed by the content of the chunk

TCP guarantees the order of packets and thus, the order of the received chunks.
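For illustration, a chunked body on the wire consists of hex-encoded chunk sizes, each followed by the chunk’s payload, and is terminated by a zero-sized chunk:

```
HTTP/1.1 200 OK
Transfer-Encoding: chunked

c\r\n
Hello world!\r\n
0\r\n
\r\n
```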

I implemented an OutputStream capable of encoding written data according to the headers of the response. It can be composed with other streams from the GIO API, which is more flexible than a MessageBody.

The response exposes two streams:

  • output_stream, the raw and protected stream
  • body, the public stream that is safe for transporting the message body

Some implementations (CGI, FastCGI, etc…) delegate the transfer encoding responsibility to the HTTP server.

Status handling

The setup and teardown approach has been deprecated in favour of the next continuation and status handling.

Handlers can be connected to statuses thrown during the processing of a request.

If a status handler throws a status, it will be captured by the Router. This can be used to cancel the effect of a redirection for instance.

Likewise, status handling can invoke end to end the request processing and next to delegate the work to the next status handler in the queue.

app.status (404, (req, res, end) => {
    end ();
});

app.status (302, (req, res, next) => {
    // ...
});

Roadmap (long-term stuff)

More to come, but I have already a few ideas

  • handling of multipart/* messages issue #81
  • polling for FastCGI issue #77
  • implementation for SCGI issue #60
  • middlewares issue #51
  • reverse rule-based routes issue #45
  • get CTPL Vala bindings right with GI (GObject Introspection)
  • more converters for common web encodings (base64, urlencoded, etc…)

FastCGI streams can benefit from polling and reimplementing them is planned. The APIs would remain the same, as everything would happen under the hood.

Reversing rules and possibly regular expressions would make URLs in web applications much easier to maintain.

CTPL has a hand-written binding and it would be great to just generate them with GI.

Posted on .

Third week update! (from 11/05/15 to 22/05/15) in Valum

In the past two weeks, I’ve been working on the roadmap for the 0.1.0-alpha release.

gcov

gcov has been fully integrated to measure code coverage with cpp-coveralls. gcov works by injecting code during the compilation with gcc.

You can see the coverage on coveralls.io, it’s updated automatically during the CI build.


The inconvenience is that, since coveralls measures coverage from the C sources generated by valac, it is not possible to identify which regions are covered in the Vala code. However, it is still possible to identify these regions in the generated code.
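Assuming GCC, the instrumentation can be enabled by forwarding the flags through valac (the file names here are hypothetical and the build invocation is a sketch):

```shell
# instrument the C code generated by valac for gcov
valac --Xcc=--coverage --Xcc=-g app.vala

# run the test suite, then inspect the coverage data
# produced for the generated C source
./app
gcov app.c
```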

Asynchronous handling of requests

I changed the request handling model to be fully asynchronous. The VSGI.Application handler has become an async function, which means that every user request can be processed concurrently, as the server can immediately accept a new request.

Merged glib-application-integration in the trunk

The branch was sufficiently mature to be merged in the trunk. I will only work on coverage and minor improvements until I reach the second alpha release.

It brings many improvements:

  • VSGI.Server inherits from GLib.Application, providing enhancements described in the Roadmap for 0.1.0-alpha
  • setup and teardown in the Router for pre- and post-processing of requests
  • user documentation improvements (Sphinx + general rewrites)
  • optional features based on gio-2.0 and libsoup-2.4 versions

0.1.0-alpha released!

I have released a 0.1.0-alpha version. For more information, you can read the release notes on GitHub, download it and try it out!

Posted on and tagged with gcc.

Roadmap for 0.1.0-alpha in Valum

0.0.1 is far behind what will be introduced in 0.1.0-alpha. This release will bring new features and API improvements.

We are releasing a new alpha since the first version was a working but incomplete prototype.

Along with the changes already introduced, the release will be ready as soon as the following will be done:

  • merge complete FastCGI integration in the trunk, which includes integration of GLib.Application in the server design
  • API documentation (improvements and merge of the valadoc branch)
  • improve user documentation
  • more tests and a measured coverage with gcov

Integration of GLib.Application is really cool. It basically provides any application with a GLib.MainLoop to process asynchronous tasks, and signals to handle startup and shutdown events right from the Server.

using Valum;
using VSGI.Soup;

var app    = new Router ();
var server = new Server (app);

// unique identifier for your application
app.set_application_id ("your.unique.application.id");

app.get("", (req, res) => {
    res.write ("Hello world!".data);
});

server.startup.connect (() => {
    // no requests have been processed yet
    // initialize services here (eg. database, memcached, ...)
});

server.shutdown.connect (() => {
    // called after the mainloop finished
    // all requests have been processed
});

server.run ();

Moreover, applications can access a DBusConnection and obtain environment data or request external services.

This sample uses the org.freedesktop.hostname DBus service to obtain information about the hosting environment. Note that you can use DBus to perform IPC between workers fairly easily in Vala.

var connection = server.get_dbus_connection ();

app.get ("hostname", (req, res) => {
    // asynchronous dbus call
    connection.call.begin (
        "org.freedesktop.hostname",  // bus name
        "/org/freedesktop/hostname", // object path
        "org.freedesktop.hostname",  // interface
        "Hostname",
        null, // no arguments
        VariantType.STRING, // return type
        DBusCallFlags.NONE,
        -1, // timeout
        null,
        (obj, r) => {
            var hostname = connection.call.end (r);
            res.write (hostname.get_string ().data);
        });
});

GLib.Application is designed to be held and released so that it can quit automatically whenever it’s idle (with a possible timeout). Gtk uses it to count the number of open windows; we use it to count the number of requests being processed.

Past a certain timeout after the last release, the worker will terminate.

If you have a long-running operation to process asynchronously that does not involve writing the response (in which case you are better off blocking), you have to hold the application to keep it alive while it’s processing.
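A sketch of what holding looks like, assuming the Server exposes GLib.Application’s hold and release (the do_heavy_work_async function is hypothetical):

```vala
app.get ("compute", (req, res) => {
    // keep the worker alive during the background task
    server.hold ();

    do_heavy_work_async.begin ((obj, result) => {
        do_heavy_work_async.end (result);
        res.write ("done".data);
        // the application may quit again once idle
        server.release ();
    });
});
```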

What next?

The next release will be more substantial:

  • middlewares
  • components (if relevant)
  • improve VSGI specification
    • more signals to handle external events
    • better documentation to guide implementations
  • new VSGI implementations (SCGI & CGI)
  • extract VSGI (if ready)

I decided to go ahead with a Mustache implementation targeting GLib and GObject. I’m still surprised that it hasn’t been done yet; it is clearly essential for bringing Vala into general-purpose web development. The development will happen in a separate project on GitHub and it will not block the release of the framework.

GResource API is really great and it would be truly amazing to bundle Mustache templates like we already do with CTPL.

Posted on .