A Day in a Pile of Work

My personal Web development blog

Valadoc.org Rewrite and More! in Valum

The rewrite of valadoc.org in Vala using Valum has been completed and should eventually be deployed by the elementary OS team (see pull #40). There are a couple of interesting things there too:

  • experimental search API using JSON via the /search endpoint
  • GLruCache now has Vala bindings and an improved API
  • an eventual GMysql wrapper around the C client API if extracting the classes I wrote is worth it

In the meantime, you can test it at valadoc2.elementary.io and report any regression on the pull-request.

Valum 0.3 has been patched and improved while I have been working on the 0.4 feature set. There’s a work-in-progress WebSocket middleware, and VSGI 1.0 and PyGObject support are planned.

If everything goes as planned, I should finish the AJP backend and maybe consider Lwan.

On top of that, there’s Windows support coming, although the most difficult part is testing it. I might need some help there to set up AppVeyor CI.

I’m aware of the harsh discussions about the state of Vala and whether or not it will just end in an abysmal void. I would advocate inertia here: the current state of the language still makes it an excellent candidate for writing GNOME-related software, and this is not expected to change.

Posted on .

Announcing Valum 0.3 in Valum

The first release candidate for Valum 0.3 has been launched today!

Get it, test it and be the first to find a bug! The final release will come shortly after along with various Linux distributions packages.

This post reviews the changes and features that have been introduced since the 0.2 release. There’s been a lot of work, so take a comfortable seat and brew yourself a strong coffee.

The most significant change has probably been the introduction of Meson as a build system and all the new deployment strategies it now makes possible.

If you prefer avoiding a full install, it’s now possible to use it as a subproject. Subprojects are defined as subdirectories of the subprojects directory, which you can conveniently track using git submodules.

project('', 'c', 'vala')

glib = dependency('glib-2.0')
gobject = dependency('gobject-2.0')
gio = dependency('gio-2.0')
soup = dependency('libsoup-2.4')
vsgi = subproject('valum').get_variable('vsgi')
valum = subproject('valum').get_variable('valum')

executable('app', 'app.vala',
           dependencies: [glib, gobject, gio, soup, vsgi, valum])

Once installed, however, all that is needed is to pass --pkg=valum-0.3 to the Vala compiler.

vala --pkg=valum-0.3 app.vala

In app.vala,

using Valum;
using VSGI;

public int main (string[] args) {
    var app = new Router ();

    app.get ("/", (req, res) => {
        return res.expand_utf8 ("Hello world!");
    });

    return Server.@new ("http", handler: app)
                 .run (args);
}

There have been a lot of new features, and I hope I won’t miss any!

There’s a new url_for utility in Router that comes with named routes. It basically allows one to reverse URL patterns defined with rules and raw paths.

All that is needed is to pass a name to rule, path or any method helper.

I discovered the : notation for named variadic arguments if they alternate between strings and values. This is typically used to initialize GLib.Object.

using Valum;
using VSGI;

var app = new Router ();

app.get ("/", (req, res) => {
    return res.expand_utf8 ("<a href=\"%s\">View profile of %s</a>".printf (
        app.url_for ("user", id: "5"), "John Doe"));
});

app.get ("/users/<int:id>", (req, res, next, ctx) => {
    var id = ctx["id"].get_string ();
    return res.expand_utf8 ("Hello %s!".printf (id));
}, "user");

In Router, we also have:

  • asterisk to handle * URI
  • once for performing initialization
  • path for a path-based route
  • rule to replace method
  • register_type rather than a GLib.HashTable<string, Regex>

Another significant change is that the previous state stack has been replaced by a context tree with recursive key resolution. It pretty much maps strings to GLib.Value in a non-destructive way.
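The idea of recursive key resolution can be sketched as follows (a hypothetical Python class, not the actual Vala implementation): a lookup first tries the local context and then falls back to the parent.

```python
class ContextTree:
    """Minimal sketch of a context tree with recursive key resolution."""

    def __init__(self, parent=None):
        self.parent = parent
        self.values = {}

    def __setitem__(self, key, value):
        # Setting a key never destroys the parent's value for that key.
        self.values[key] = value

    def __getitem__(self, key):
        # Resolve locally first, then recurse into the parent context.
        if key in self.values:
            return self.values[key]
        if self.parent is not None:
            return self.parent[key]
        raise KeyError(key)

root = ContextTree()
root["id"] = "5"
child = ContextTree(parent=root)
child["name"] = "John Doe"

print(child["id"])    # resolved from the parent context
print(child["name"])  # resolved locally
```

Since resolution is non-destructive, a subrouter can shadow a key without affecting what its parent router sees.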

In terms of new middlewares, you’ll be glad to see all the built-in functionality we now support:

  • authentication with support for the Basic scheme via authenticate
  • content negotiation via negotiate, accept and more!
  • static resource delivery from GLib.File and GLib.Resource bundles
  • basic to strip the Router responsibilities
  • subdomain
  • basepath to prefix URLs
  • cache_control to set the Cache-Control header
  • branch on raised status codes
  • perform work safely and don’t let any error leak!
  • stream events with stream_events

Now, which one to cover?

The basepath one is my personal favourite, because it allows one to create prefix-agnostic routers.

var app = new Router ();
var api = new Router ();

// matches '/api/v1/'
api.get ("/", (req, res) => {
    return res.expand_utf8 ("Hello world!");
});

app.use (basepath ("/api/v1", api.handle));

The only missing feature is to retranslate URLs directly from the body. I think we could use some GLib.Converter here.

The negotiate middleware and its derivatives are really handy for declaring the available representations of a resource.

app.get ("/", accept ("text/html; text/plain", (req, res, next, ctx, ct) => {
    switch (ct) {
        case "text/html":
            return res.expand_utf8 ("");
        case "text/plain":
            return res.expand_utf8 ("Hello world!");
        default:
            assert_not_reached ();
    }
}));

There’s a lot of stuff happening in each of them so refer to the docs!

For a quick review of Request and Response, we now have the following helpers:

  • lookup_query to fetch a query item and deal with its null case
  • lookup_cookie and lookup_signed_cookie to fetch a cookie
  • cookies to get cookies from a request and response
  • convert to apply a GLib.Converter
  • append to append a chunk into the response body
  • expand to write a buffer into the response body
  • expand_stream to pipe a stream
  • expand_file to pipe a file
  • end to end a response properly
  • tee to tee the response body into an additional stream

All the utilities to write the body come in _bytes and _utf8 variants. The latter properly sets the content charset when applicable.

Back in Server, implementations have been modularized with GLib.Module and are now dynamically loaded. What used to be a VSGI.<server> namespace has now simply become Server.new ("<name>"). Implementations are installed in ${prefix}/${libdir}/vsgi-0.3/servers, which can be overridden with the VSGI_SERVER_PATH environment variable.

The VSGI specification is not yet 1.0, so please don’t write a custom server for now, or if you do so, please submit it for inclusion. There’s some work in progress for Lwan and AJP as I speak, if you have some time to spend.

Options have been moved into GLib.Object properties and a new listen API based on GLib.SocketAddress makes it more convenient than ever.

using VSGI;

var tls_cert = new TlsCertificate.from_files ("localhost.cert",
                                              "localhost.key");
var http_server = Server.new ("http", https: true,
                                      tls_certificate: tls_cert);

http_server.set_application_callback ((req, res) => {
    return res.expand_utf8 ("Hello world!");
});
http_server.listen (new InetSocketAddress (new InetAddress.loopback (SocketFamily.IPV4), 3003));

new MainLoop ().run ();

The GLib.Application code has been extracted into the new VSGI.Application cushion used when calling run. It parses the CLI arguments, sets up the logger and turns SIGTERM into a graceful shutdown.

Server can also fork to scale on multicore architectures. I’ve backtracked on the Worker class for dealing with IPC communication, but if anyone is interested in building a nice clustering system, I would be glad to look into it.

That wraps it up, the rest can be discovered in the updated docs. The API docs should be available shortly via valadoc.org.

I managed to cover this exhaustively with abidiff, a really nice tool to diff two ELF files.

Long-term notes

Here’s some long-term notes for things I couldn’t put into this release or that I plan at a much longer term.

  • multipart streams
  • digest authentication
  • async delegates
  • epoll and kqueue with wip/pollcore
  • schedule future release with the GNOME project
  • GIR introspection and typelibs for PyGObject and Gjs

The GIR and typelibs stuff might not be suitable for Valum, but VSGI could have a bright future with Python or JavaScript bindings.

Coming releases will be much less time-consuming, as the big step toward something actually usable has now been made. Expect one every 6 months or so.

Posted on and tagged with Vala.

What is Meson?

I discovered Meson a couple of years back and have since used it for most of my projects written in Vala. This post is an attempt at describing the good, the bad and the ugly of this build system.

So, what is Meson?

  • a build system
  • portable (see Python portability)
  • a Ninja generator
  • use case oriented
  • fast
  • opinionated

What is it not?

  • a general purpose build system
  • a Turing-complete language
  • extensible (only in Python)

It handles 80% of the cases nicely and elegantly.

Since it is use case oriented, features are introduced on need. It keeps a tight balance between conciseness, generality and features.

It mixes the configure and build steps so that the build essentially becomes one big tree. The build system then determines what goes into the configuration and what goes into the build.

The cognitive load is very low, which means it’s very easy to learn the basics and make actual use of it. This is critical, because all the time spent on setting up the build hardly contributes to the project goal.

The following is a basic build that checks for dependencies (using pkg-config) and builds an executable:

project('Meson Example', 'c', 'vala')

glib = dependency('glib-2.0')
gobject = dependency('gobject-2.0')

executable('app', 'app.vala', dependencies: [glib, gobject])

Building becomes a piece of cake:

mkdir build && cd build
meson ..
ninja

Only a few keywords are sufficient for most builds:

  • executable
  • library with shared_library and static_library
  • dependency
  • declare_dependency

Benchmarks and tests are built in: just pass the executable to either benchmark or test.
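For instance, a hypothetical app_test executable could be registered for both (a sketch, not taken from Valum’s own build files):

```meson
# Build the test program like any other executable.
test_exe = executable('app_test', 'app_test.vala',
                      dependencies: [glib, gobject])

# Register it with 'ninja test' and 'ninja benchmark' respectively.
test('app test', test_exe)
benchmark('app benchmark', test_exe)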

The main downside is that if what you want to do is not supported, you either have to hack things or wait until the feature gets into the build system.

The system is very opinionated. That’s both a good and a bad thing: good since you don’t need to write a lot to get most jobs done, bad because you might eventually hit a wall.

There’s also the Python question. Meson requires at least Python 3.4. This is becoming less of a problem as old distributions progressively die out, but it can still hold you back for now. Here’s a few ideas to remedy this problem:

  • build a dependency-free zipball (see issue #588)
  • backport Meson to older Python versions

Meson is getting better over time and so far has managed to become the best build system for Vala. This is why I highly recommend it.

Posted on and tagged with Meson and Vala.

Merged GModule branch! in Valum

Valum now supports dynamically loadable server implementations with GModule!

Servers are typically looked up in /usr/lib64/vsgi/servers with the libvsgi-<name>.so pattern (although this is highly system-dependent).

This works by setting the RPATH of the VSGI shared library to $ORIGIN/vsgi/servers so that it looks into that folder first.

The VSGI_SERVER_PATH environment variable can be set as well to explicitly provide a directory containing implementations.

To implement a compliant VSGI server, all you need is a server_init symbol which complies with the ServerInitFunc delegate, like the following:

public Type server_init (TypeModule type_module) {
    return typeof (VSGI.Custom.Server);
}

public class VSGI.Custom.Server : VSGI.Server {
    // ...
}

It has to return a type that is derived from VSGI.Server and instantiable with GLib.Object.new. The Vala compiler will automatically generate the code that registers classes and interfaces into the type_module parameter.

Some code from CGI has been moved into VSGI to provide uniform handling of its environment variables. If the protocol you want complies with that, just subclass (or directly use) VSGI.CGI.Request and it will perform all the required initialization.

public class VSGI.Custom.Request : VSGI.CGI.Request {
    public Request (IOStream connection, string[] environment) {
        base (connection, environment);
    }
}

For more flexibility, servers can be loaded with ServerModule directly, allowing one to specify an explicit lookup directory and control when the module should be loaded or unloaded.

var cgi_module = new ServerModule (null, "cgi");

if (!cgi_module.load ()) {
    assert_not_reached ();
}

var server = Object.new (cgi_module.server_type);

I received very useful support from Nirbheek Chauhan and Tim-Philipp Müller for setting the necessary build configuration for that feature.

Posted on .

Content Negotiation in Valum

I recently finished and merged support for content negotiation.

The implementation is really simple: one provides a header, a string describing expectations and a callback invoked with the negotiated representation. If no expectation is met, a 406 Not Acceptable is raised.

app.get ("/", negotiate ("Accept", "text/xml; application/json",
                         (req, res, next, ctx, content_type) => {
    // produce according to 'content_type'
}));

Content negotiation is a nice feature of the HTTP protocol allowing a client and a server to negotiate the representation (e.g. content type, language, encoding) of a resource.

One very nice part allows the user agent to state a preference and the server to express the quality of a given representation. This is done by specifying the q parameter, and the negotiation process attempts to maximize the product of both values.

The following example expresses that the XML version is of poor quality, which is typically the case when it’s not the source document. JSON would be favoured (implicitly q=1) if the client does not state any particular preference.

accept ("text/xml; q=0.1, application/json", () => {
    // ...
});

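The maximization itself can be sketched as follows (a hypothetical Python helper, not Valum’s actual negotiation code): for each representation, multiply the client’s preference by the server’s quality and keep the best score.

```python
def negotiate(client_prefs, server_qualities):
    """Pick the representation maximizing client q * server q.

    Both arguments map content types to quality values in [0, 1].
    Returns None when nothing is acceptable (a 406 in HTTP terms).
    """
    best, best_score = None, 0.0
    for content_type, server_q in server_qualities.items():
        client_q = client_prefs.get(content_type, 0.0)
        score = client_q * server_q
        if score > best_score:
            best, best_score = content_type, score
    return best

# The server considers its XML poor quality (q=0.1), JSON implicitly q=1.
server = {"text/xml": 0.1, "application/json": 1.0}

# A client with no particular preference gets JSON...
print(negotiate({"text/xml": 1.0, "application/json": 1.0}, server))
# ...while a strong enough preference for XML can still win.
print(negotiate({"text/xml": 1.0, "application/json": 0.05}, server))
```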

Mounted as a top-level middleware, it provides a nice way of setting a Content-Type: text/html; charset=UTF-8 header and filtering out non-compliant clients.

using Tmpl;
using Valum;

var app = new Router ();

app.use (accept ("text/html", (req, res, next, ctx, content_type) => {
    return next ();
}));

app.use (accept_charset ("UTF-8", (req, res, next, ctx, charset) => {
    return next ();
}));

var home = new Template.from_path ("templates/home.html");

app.get ("/", (req, res) => {
    home.expand (res.body, null);
    return res.end ();
});

This is another step toward the 0.3 release!

Posted on .

Fork! in Valum

Ever heard of fork?

using GLib;
using VSGI.HTTP;

var server = new Server ("", (req, res) => {
    return res.expand_utf8 ("Hello world!");
});

server.listen (new VariantDict ().end ());
server.fork ();

new MainLoop ().run ();

Yeah, there’s a new API for listening and forking with custom options…

The fork system call will actually copy the whole process into a new process, running the exact same program.

Although memory is not shared, file descriptors are, so you can have workers listening on common interfaces.
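The descriptor sharing can be illustrated with a small Python sketch (POSIX only, not VSGI code): after os.fork, the child inherits the pipe’s file descriptors and the parent reads what the child wrote.

```python
import os

def fork_and_share():
    """Fork a child that writes into a pipe inherited across fork."""
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child process: the pipe descriptors were inherited, not reopened.
        os.close(read_fd)
        os.write(write_fd, b"hello from the worker")
        os._exit(0)
    # Parent process: close the unused write end, reap the child and read.
    os.close(write_fd)
    os.waitpid(pid, 0)
    message = os.read(read_fd, 1024).decode()
    os.close(read_fd)
    return message

print(fork_and_share())  # hello from the worker
```

A listening socket behaves the same way, which is why forked workers can all accept connections on the same interface.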

I notably tested the whole thing on our cluster at IRIC. It’s a 64-core Xeon setup.

wrk -c 1024 -t 32

With a single worker:

Running 10s test @
  32 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    54.35ms   95.96ms   1.93s    98.78%
    Req/Sec   165.81    228.28     2.04k    86.08%
  41741 requests in 10.10s, 5.89MB read
  Socket errors: connect 35, read 0, write 0, timeout 13
Requests/sec:   4132.53
Transfer/sec:    597.28KB

With 63 forks (64 workers):

Running 10s test @
  32 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    60.83ms  210.70ms   2.00s    93.58%
    Req/Sec     2.99k   797.97     7.44k    70.33%
  956577 requests in 10.10s, 135.02MB read
  Socket errors: connect 35, read 0, write 0, timeout 17
Requests/sec:  94720.20
Transfer/sec:     13.37MB

That’s about 1500 req/sec per worker and a speedup factor of 23. Latency is barely affected.

Posted on .

Work on Memcached-GLib

The past few days, I’ve been working on a really nice libmemcached GLib wrapper.

  • main loop integration
  • fully asynchronous API
  • error handling

The whole code is available under the LGPLv3 from arteymix/libmemcached-glib.

It should reach 1.0 very quickly, only a few features are missing:

  • a couple of function wrappers
  • integration for libmemcachedutil
  • async I/O improvements

Once released, it might be interesting to build a GTK UI for Memcached upon that work. Meanwhile, it will be a very useful tool to build fast web applications with Valum.

Posted on .

Proposal for asynchronous delegates in Vala

This post describes a feature I will attempt to implement this summer.

An async delegate is declared by simply extending a traditional delegate with the async trait.

public async delegate void AsyncDelegate (GLib.OutputStream @out);

The syntax of the callback is the same. It’s not necessary to add anything, since the async trait is inferred from the type of the variable holding it.

AsyncDelegate d = (@out) => {
    yield @out.write_all_async ("Hello world!".data, null);
};

Just like regular callbacks, asynchronous callbacks are first-class citizens.

public async void test_async (AsyncDelegate callback,
                              OutputStream  @out) {
    yield callback (@out);
}

It’s also possible to pass an asynchronous function which is type-compatible with the delegate signature:

public async void hello_world_async (OutputStream @out) {
    yield @out.write_all_async ("Hello world!".data, null);
}

yield test_async (hello_world_async, @out);


I still need to figure out how to handle chaining for async lambdas. Here’s a few ideas:

  • refer to the callback using this (weird..)
  • introduce a callback keyword
AsyncDelegate d = (@out) => {
    Idle.add (this.callback);
};

AsyncDelegate d = (@out) => {
    Idle.add (callback);
};

How it would end up for Valum

Most of the framework could be revamped with the async trait in ApplicationCallback, HandlerCallback and NextCallback.

app.@get ("/me", (req, res, next) => {
    if (req.lookup_signed_cookie ("session") == null) {
        return yield next (req, res);
    }
    return yield res.expand_utf8_async ("Hello world!");
});

The semantics of the return value would simply state whether the request has been handled, instead of being eventually handled.

Posted on .

Basepath in Valum

I have recently introduced a basepath middleware and I thought it would be relevant to describe it further.

It has been possible for a while to compose routers using subrouting. This is very important for writing modular applications.

var app = new Router ();
var user = new Router ();

user.get ("/user/<int:id>", (req, res, next, ctx) => {
    var id = ctx["id"].get_string ();
    var user = new User.from_id (id);
    return res.extend_utf8 ("Welcome %s".printf (user.username));
});

app.rule ("/user", user.handle);

Now, using basepath, it’s possible to design the user router without specifying the /user prefix on rules.

This is very important, because we want to be able to design the user router as if it were the root and rebase it on need upon any prefix.

var app = new Router ();
var user = new Router ();

user.get ("/<int:id>", (req, res, next, ctx) => {
    return res.extend_utf8 ("Welcome %s".printf (ctx["id"].get_string ()));
});

app.use (basepath ("/user", user.handle));

How it works

When passing through the basepath middleware, requests which have a prefix match with the basepath are stripped of that prefix and forwarded.
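The prefix-stripping logic can be sketched as follows (a hypothetical Python helper with made-up names, not the Vala implementation):

```python
def basepath(prefix, handler):
    """Forward prefix-matching requests to the handler, stripped of it."""
    def middleware(path, next_handler):
        if path == prefix or path.startswith(prefix + "/"):
            # Rebase the path so the wrapped router sees itself as the root.
            stripped = path[len(prefix):] or "/"
            return handler(stripped)
        # No prefix match: pass the request along untouched.
        return next_handler(path)
    return middleware

api = basepath("/user", lambda path: "user router got %s" % path)

print(api("/user/5", lambda path: "404"))  # user router got /5
print(api("/other", lambda path: "404"))   # 404
```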

But there’s more!

That’s not all! The middleware also handles errors that set the Location header, from the Success.CREATED and Redirection.* domains.

user.post ("/", (req, res) => {
    throw new Success.CREATED ("/%d", 5); // rewritten as '/user/5'
});

It also rewrites the Location header if it was set directly.

user.post ("/", (req, res) => {
    res.status = Soup.Status.CREATED;
    res.headers.replace ("Location", "/%d".printf (5));
    return res.end ();
});

Rewriting the Location header is exclusively applied to absolute paths starting with a leading slash /.

It can easily be combined with the subdomain middleware to provide a path-based fallback:

app.subdomain ("api", api.handle);
app.use (basepath ("/api/v1", api.handle));

Posted on .

Just reached 6.3k req/sec in Valum

I often profile Valum’s performance with wrk to ensure that no regressions hit the stable release.

It helped me identify a couple of mistakes in various implementations.

Anyway, I’m glad to announce that I have reached 6.3k req/sec on a small payload, all relative to my very low-grade Acer C720.

The improvements are available in the 0.2.14 release.

  • wrk with 2 threads and 256 connections running for one minute
  • Lighttpd spawning 4 SCGI instances

Build Valum with examples and run the SCGI sample:

./waf configure build --enable-examples
lighttpd -D -f examples/scgi/lighttpd.conf

Start wrk:

wrk -c 256


Running 1m test @
  2 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    40.26ms   11.38ms 152.48ms   71.01%
    Req/Sec     3.20k   366.11     4.47k    73.67%
  381906 requests in 1.00m, 54.31MB read
Requests/sec:   6360.45
Transfer/sec:      0.90MB

There’s still a few things to get done:

  • hanging connections benchmark
  • throughput benchmark
  • logarithmic routing #144

The trunk buffers SCGI requests asynchronously, which should improve concurrency with blocking clients.

Lighttpd is not really suited for throughput, because it buffers the whole response. Sending a lot of data is problematic and uses up a lot of memory.

Valum is designed with streaming in mind, so it has a very low (if not negligible) memory footprint.

I did reach 6.5k req/sec, but since I could not reliably reproduce it, I preferred posting these results.

Posted on .