A Day in a Pile of Work

My personal Web development blog

Third week update! 11/05/15 to 22/05/15 in Valum

In the past two weeks, I’ve been working on the roadmap for the 0.1.0-alpha release.

gcov

gcov has been fully integrated to measure code coverage with cpp-coveralls. gcov works by injecting instrumentation code during compilation with gcc.

You can see the coverage on coveralls.io, it’s updated automatically during the CI build.

Current master branch coverage: Coverage Status

The inconvenience is that, since coveralls measures coverage from the C sources that valac generates, it is not possible to identify which regions are covered in the Vala code. However, it is still possible to identify these regions in the generated code.

Asynchronous handling of requests

I changed the request handling model to be fully asynchronous. The VSGI.Application handler has become an async function, which means that user requests are processed concurrently: the server can immediately accept a new request.
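To illustrate the idea (in Python rather than Vala, with made-up names, so purely a sketch of the model): when the handler is an async function, the server can schedule each request and immediately move on to accept the next one instead of blocking.

```python
import asyncio

async def handler(request):
    # simulate slow per-request work (e.g. a database query)
    await asyncio.sleep(0.1)
    return f"Hello, {request}!"

async def server(requests):
    # each request is scheduled immediately; the loop does not wait
    # for one handler to finish before accepting the next one
    tasks = [asyncio.create_task(handler(r)) for r in requests]
    return await asyncio.gather(*tasks)

responses = asyncio.run(server(["a", "b", "c"]))
```

The three handlers above run concurrently, so the total time is roughly one sleep, not three.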

Merged glib-application-integration in the trunk

The branch was sufficiently mature to be merged in the trunk. I will only work on coverage and minor improvements until I reach the second alpha release.

It brings many improvements:

  • VSGI.Server inherits from GLib.Application, providing enhancements described in the Roadmap for 0.1.0-alpha
  • setup and teardown in the Router for pre and post processing of requests
  • user documentation improvements (Sphinx + general rewrites)
  • optional features based on gio-2.0 and libsoup-2.4 versions

0.1.0-alpha released!

I have released a 0.1.0-alpha version. For more information, you can read the release notes on GitHub, download it and try it out!

Posted on and tagged with gcc.

Roadmap for 0.1.0-alpha in Valum

0.0.1 is far behind what will be introduced in 0.1.0-alpha. This release will bring new features and API improvements.

We are releasing a new alpha since the first version was a working but incomplete prototype.

Along with the changes already introduced, the release will be ready as soon as the following is done:

  • merge complete FastCGI integration in the trunk, which includes integration of GLib.Application in the server design
  • api documentation (improvements and merge of the valadoc branch)
  • improve user documentation
  • more tests and a measured coverage with gcov

Integration of GLib.Application is really cool. It basically provides any application with a GLib.MainLoop to process asynchronous tasks, and signals to handle startup and shutdown events right from the Server.

using Valum;
using VSGI.Soup;

var app    = new Router ();
var server = new Server (app);

// unique identifier for your application
app.set_application_id ("your.unique.application.id");

app.get("", (req, res) => {
    res.write ("Hello world!".data);
});

server.startup.connect (() => {
    // no requests have been processed yet
    // initialize services here (eg. database, memcached, ...)
});

server.shutdown.connect (() => {
    // called after the mainloop finished
    // all requests have been processed
});

server.run ();

Moreover, the application can access a DBusConnection to obtain environment data or request external services.

This sample uses the org.freedesktop.hostname DBus service to obtain information about the hosting environment. Note that you can use DBus to perform IPC between workers fairly easily in Vala.

var connection = server.get_dbus_connection ();

app.get ("hostname", (req, res) => {
    // asynchronous dbus call
    connection.call.begin (
        "org.freedesktop.hostname",  // bus name
        "/org/freedesktop/hostname", // object path
        "org.freedesktop.hostname",  // interface
        "Hostname",
        null, // no arguments
        VariantType.STRING, // return type
        DBusCallFlags.NONE,
        -1, // timeout
        null,
        (obj, r) => {
            var hostname = connection.call.end (r);
            res.write (hostname.get_string ().data);
        });
});

A GLib.Application is designed to be held and released so that it can quit automatically whenever it is idle (with an optional timeout). Gtk uses this to count the number of open windows; we use it to count the number of requests being processed.

Past a certain timeout after the last release, the worker will terminate.

If you have a long-running operation to process asynchronously that does not involve writing the response (in which case you are better off blocking), you have to hold the application to keep it alive while it is processing.
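The counting semantics can be sketched roughly like this (a hypothetical Python model for illustration, not the actual GLib API):

```python
# Sketch of GLib.Application-style hold/release counting: the
# application stays alive as long as the hold count is positive,
# and may quit once it drops back to zero.
class Application:
    def __init__(self):
        self.hold_count = 0
        self.running = True

    def hold(self):
        self.hold_count += 1

    def release(self):
        self.hold_count -= 1
        if self.hold_count == 0:
            # in GLib, the idle timeout would start counting here
            self.running = False

app = Application()
app.hold()      # a request starts being processed
app.hold()      # a second concurrent request
app.release()   # first request done; one still in flight
assert app.running
app.release()   # last request done; the application may now quit
```

In Valum's case, each accepted request holds the application and releases it once the response is written.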

What next?

The next release will be more substantial:

  • middlewares
  • components (if relevant)
  • improve VSGI specification
    • more signals to handle external events
    • better documentation to guide implementations
  • new VSGI implementations (SCGI & CGI)
  • extract VSGI (if ready)

I decided to go ahead with a Mustache implementation that targets GLib and GObject. I am still surprised that it has not been done yet; it is clearly essential for bringing Vala into general-purpose web development. Development will happen in a separate project on GitHub and will not block the release of the framework.

GResource API is really great and it would be truly amazing to bundle Mustache templates like we already do with CTPL.

Posted on .

First week update! 04/05/15 to 08/05/15 in Valum

As part of the first week, I have to produce an initial document describing what I will be working on during the semester. Once it’s written down, I will post it on this blog.

I have already a good idea of what I would like to work on:

  • finish VSGI specification
  • second alpha release & feedback from communities
  • SCGI implementation
  • mustache implementation for GLib
  • more tests and awesomeness

The first alpha release is already 4 years old and this one brings such radical changes that we're almost starting over. Therefore, a second alpha release will permit us to tease the targeted audience and obtain recommendations to build the very best framework.

SCGI is a very simple protocol for communicating HTTP messages over streams. It will take real advantage of the GIO stream API, and I am sure this could become an efficient way to serve web applications in production.
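To give an idea of how simple the protocol is, here is a rough sketch (in Python, purely illustrative) of how an SCGI request is encoded: the headers are null-delimited key/value pairs wrapped in a netstring, followed by the raw body.

```python
def encode_scgi_request(body: bytes, headers: dict) -> bytes:
    # CONTENT_LENGTH must come first, and "SCGI" = "1" must be present
    pairs = [("CONTENT_LENGTH", str(len(body))), ("SCGI", "1")]
    pairs += list(headers.items())
    payload = b"".join(
        k.encode() + b"\x00" + v.encode() + b"\x00" for k, v in pairs
    )
    # the header block is wrapped in a netstring: "<length>:<payload>,"
    return str(len(payload)).encode() + b":" + payload + b"," + body

req = encode_scgi_request(b"hello", {"REQUEST_METHOD": "POST",
                                     "REQUEST_URI": "/"})
```

The server side only has to read the netstring length, split the headers on null bytes and then read CONTENT_LENGTH bytes of body from the stream, which maps naturally onto GIO input streams.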

Mustache (or any templating engine) is essential if we want to bring Valum beyond web service development. I plan to provide a GLib implementation so that it can be used anywhere. CTPL will remain the default templating engine for its simplicity and convenience, as it covers simple UI requirements quite well.

Testing is part of any sane software development process. I will focus on providing quality software that does not break easily.

Subsequent weeks will bring more sustained posts describing what has been done, so stay tuned!

Posted on and tagged with Vala.

Project accepted! in Valum

The project was accepted!

I will officially work on Valum this summer under the direction of François Major at the IRIC lab.

As part of the evaluation, I will have the opportunity to demonstrate the framework's potential with an assignment. Also, I am required to keep track of the project's progress, so keep in touch with the Valum category of my blog!

Posted on and tagged with valum.

Presentation of Valum at the IRIC lab team in Valum

I have presented the framework to the IRIC team. They do research in bio-informatics.

I covered the following points:

  • history of Vala
  • presentation of Valum
  • code examples
  • middleware architecture
  • framework architecture
  • justification
  • objectives
  • application

You can access the presentation PDF and the source in Markdown. It is generated by pandoc, beamer and LaTeX.

Build the template with:

pandoc -t beamer -V theme=Rochester --latex-engine xelatex -o presentation.pdf presentation.md

Posted on and tagged with vala and web.

Just discovered.. zsh!

I never thought I would find zsh actually that great. I feel like I've been missing a nice prompt for ages.

I would like to cover my first experience a little and show you how you can turn your default shell into a powerful development tool. In order to do that, you have to:

  1. install zsh
  2. get a real plugin manager (antigen here!)
  3. get a really nice and powerful prompt
  4. enjoy all the above!

zsh is quite easy to install using your distribution package manager:

yum install zsh

antigen can be cloned from GitHub

git clone https://github.com/zsh-users/antigen.git .antigen

Now, edit your first .zshrc initialization file!

source ~/.antigen/antigen.zsh

antigen use oh-my-zsh

antigen bundle git

antigen bundle zsh-users/zsh-completions
antigen bundle zsh-users/zsh-syntax-highlighting
antigen bundle nojhan/liquidprompt

antigen apply

Run zsh from your current shell and antigen should clone and install all the declared bundles.

liquidprompt will be installed, which you shall enjoy quite greatly.

Posted on and tagged with Linux and zsh.

Using ghdl instead of Quartus II

ghdl is a great tool to prototype hardware quickly. It can be combined with gtkwave to analyze signals.

I did hardware design last semester and this is a bit tough for my mind right now, but I think it could help others having a hard time with Quartus II. This post explains how to replace Quartus in the process of developing a device …

First of all, you need ghdl and gtkwave installed on your workstation.

yum install ghdl gtkwave

Then you can create a sample project or clone one I did last semester.

git clone https://github.com/arteymix/ghdl-lmc.git

A project usually consists of entities and testbeds for these entities. A testbed applies inputs to an entity and makes assertions on its outputs.

library ieee;

use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

library work;

entity testbed is
end entity;

architecture testbed of testbed is

    type inputs_t is array(0 to 255) of signed(11 downto 0);

    constant inputs: inputs_t := (
        0 => x"000",
        1 => x"123",
        2 => x"456",
        3 => x"ABC",
        others => x"000"
    );

    signal cycle: natural             := 0;
    signal clk:   std_logic           := '1';
    signal input: signed(11 downto 0) := inputs(cycle);

begin
    input <= inputs(cycle mod 256);
    lmc: entity work.lmc port map(clk, '1', input);
    process is
    begin

        while TRUE loop
            clk <= '1';
            wait for 10 ns;
            clk <= '0';
            wait for 10 ns;
            cycle <= cycle + 1;
        end loop;

    end process;
end architecture;

ghdl can generate a Makefile for a specified unit

ghdl --gen-makefile testbed

ghdl can analyze, elaborate or run a simulation. The analysis step is essential, as it generates object files for each entity. You can then link all of those into a single executable. This is automated by the make command.

make

Once you have a correct result, you may run it and capture signals

./testbed --vcd=testbed.vcd

gtkwave is a tool designed to analyze signals, specifically the generated vcd file.

gtkwave testbed.vcd

In gtkwave, you have to select the device in the SST section and append the signals to your work area. You may then zoom in and out to see the actual waves.

Example of gtkwave usage.

I really hope this will help you out! I did enjoy VHDL and I really liked learning its Ada-like syntax.

Posted on and tagged with GHDL.

Doing WearHacks!

This week-end, I'll be participating in WearHacks, which takes place in my hometown, Montreal. You can find out more here.

So far, I am very confident. We have 2 excellent programmers and a UI/UX guy who will be working on Unity. If everything goes as planned, we will ship a web app backed by Python offering a very interesting user experience.

I have two concerns right now. I do not know much about the device, which will be the Nod ring, and I am worried about scaling the computation we will have to do. Roughly, we have to do some linear algebra and approximate value comparison.

Thing is, I want to keep the stack in Python, as it will allow us to code lightning fast, which is essential with a 36-hour deadline.

Communicating using Bluetooth

That's the tricky part: we don't know the device since it's not on the market yet. We will have to reverse-engineer the data we need. I know it follows the upcoming OpenSpatial standard and it seems to work in 6D ((x, y, z) for both acceleration and gyroscope), so if I can extract that data, I'm fine. I need to do this quickly; if I can, the rest will be a piece of cake.

I will also have the possibility to communicate with the people who designed the hardware, so I guess I will have more information on the Bluetooth protocol involved and the general data encoding.

Scaling the computation

We will receive 3-directional data from the device, which is pretty much an accelerometer. We have to figure out the trajectory, smooth the transitions and compare it with another trajectory. This is a lot of data to process, especially since we need results in real time (otherwise, we will have to redesign the product). I plan to rely on numpy to make these calculations possible.

I will have to normalize the acceleration based on gyroscope data. I do not want to deal with rotational acceleration.

The trajectory will be approximated using one polynomial per dimension, since the acceleration data is 3-dimensional. I will generate a polynomial going through every point of acceleration using the polyfit function.

To compare two trajectories, we will have to calculate the integral of the difference between the polynomials.
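A rough sketch of that plan with numpy (the function names here are mine, for illustration, not the final code):

```python
import numpy as np

def fit_trajectory(t, accel, degree=3):
    # one polynomial per dimension, fitted on the acceleration samples
    return [np.polyfit(t, accel[:, d], degree) for d in range(accel.shape[1])]

def trajectory_distance(t, poly_a, poly_b):
    # integrate the squared difference of the two polynomials over the
    # observation window, summed across the three dimensions
    total = 0.0
    for pa, pb in zip(poly_a, poly_b):
        diff = np.polysub(pa, pb)
        squared = np.polymul(diff, diff)
        integral = np.polyint(squared)
        total += np.polyval(integral, t[-1]) - np.polyval(integral, t[0])
    return total

t = np.linspace(0.0, 1.0, 20)
accel = np.stack([t, t ** 2, np.ones_like(t)], axis=1)  # toy 3-axis samples
pa = fit_trajectory(t, accel)
pb = fit_trajectory(t, accel)
distance = trajectory_distance(t, pa, pb)  # identical trajectories
```

Squaring the difference keeps positive and negative deviations from cancelling out, so two identical trajectories give a distance of zero and anything else gives a strictly positive value.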

Communicating with the frontend

Frontend communication will be done using WebSocket through the socket.io library. It creates a full-duplex communication channel, which will allow us to communicate in both directions: the device will update the frontend and the frontend will send messages back. The frontend uses Unity, so the UnitySocketIO library will do the trick.

Producing the UI

The UI will be done using Unity 3D. I do not know much about it and I don't need to!

I really hope we will do well at this hackathon.

Posted on and tagged with Bluetooth and Hackaton.

Kohana Makefile

I have just released my sample Kohana Makefile. It has useful recipes for minification, testing, setting permissions and SELinux contexts.

Clone it

git clone https://gist.github.com/4a162dd185ac0e4f781e.git

Or wget it

wget https://gist.githubusercontent.com/arteymix/4a162dd185ac0e4f781e/raw/5f744b0e157a03e330e2512af852005c8c51d594/Makefile

It has a recipe for installing Kohana files like index.php and application/bootstrap.php

make install

It runs PHPUnit

make test

Or minify your resources

make minify

It is fully configurable, so if you use a different CSS minifier, you may edit the file as you need.

My goal is to provide every Kohana developer with a good Makefile to automate frequent tasks when using the framework. I will eventually propose it in the sample Kohana application.

Posted on and tagged with Kohana.

RueDesJuristes

What the website actually looks like.

It is a web application offering legal services to French companies. It allows the creation, modification and liquidation of these legal entities. The website can be found at ruedesjuristes.com.

It’s done entirely in PHP using the Kohana framework.

This was the first time I worked with Twig. It was a really nice experience. Development was extremely fast, and I would not be lying in saying it never failed me. I did unit testing with PHPUnit and Kohana's Request, which is surprisingly efficient.

Just to say, Twig is a template engine produced by SensioLabs. It was originally built for the Symfony framework, but it can be combined with any of your favorite tools. Since I use the Kohana framework, you should look for the Twig module written by tommcdo.

I've been a little frustrated with error handling when I had mistakes in my Twig syntax. When you get an error in a parsing tree and your debugger recursively prints a humongous structure, you run out of memory quite quickly. To avoid this, you may reduce the depth of recursion in Debug::dump by overloading it.

The great thing about Kohana is its cascading file system (CFS), which allows us to override its default behaviours.

<?php

defined('SYSPATH') or die('No direct script access.');

class Debug extends Kohana_Debug {

    /**
     * Reducing the default $depth from 10 to 2 to avoid reaching memory limit.
     */
    public static function dump($value, $length = 128, $depth = 2) {

        return parent::dump($value, $length, $depth);
    }
}

If you work with light templates, you should be fine with the default depth. It is something to consider only if you reach the memory limit.

JSON really saved me here! The website collects a big amount of data to process the legal formalities. Users have to submit forms with around 60 inputs. All the data is serialized at once using json_encode; I used the ORM::filters feature to serialize the data on demand.

Forms can also be submitted via Ajax. To do so, you may use Request::is_ajax and disable template rendering by setting Request::$auto_render to FALSE. I usually encode ORM_Validation_Exception errors if anything goes wrong: they are well structured and translated, so it becomes a charm to map errors to inputs!

<?php

if ($this->request->is_ajax()) {

    $this->auto_render = FALSE;

    $this->response->body(json_encode($errors));
}

Improvements in the mail module

The project also allowed me to upgrade my mailing module. I now consider it a really nice piece of software. It has a lovely closure syntax:

<?php

Mailer::factory()
    ->content_type('text/html; charset=utf-8')
    ->subject('Hey Foo!')
    ->body(Twig::factory('some/template'))
    ->send('foo@example.com');

It also parses the recipient list using a nice regex, so you do not have to worry about sending more personal mail to your users, even if they have non-ASCII usernames. In the worst case, it defaults to their email address.

Moreover, it supports attachments, so whenever you need to append a legal document or an alternate message:

<?php

Mailer::factory()
    ->attachment($document->content, array('Content-Type' => $document->content_type))
    ->send($user->email);

PHPUnit and self-requesting

Kohana is HMVC, which means that you can request any of your pages during the execution of any internal Request. This is extremely convenient when testing an application, since a test generally ends up being about requesting an endpoint and asserting the new state of your data.

<?php

defined('SYSPATH') or die('No direct script access.');

class HomeTest extends Unittest_TestCase {

    public function testIndex() {

        $response = Request::factory('')->execute();

        $this->assertEquals(200, $response->status());
        $this->assertTag(array('tag' => 'h1', 'content' => 'Hello world!'), $response->body());
        // ...
    }
}

Even the mail module is fully testable using Mail_Sender_Mock, a nice feature that simulates a mailing driver. It considerably speeds up testing, as you don't need to wait for Sendmail.

<?php

defined('SYSPATH') or die('No direct script access.');

class HomeTest extends Unittest_TestCase {

    public function testMail() {

        $response = Request::factory('mail')
            ->method(Request::POST)
            ->values(array('email' => 'foo@example.com'))
            ->execute();

        $mail = array_pop(Mail_Sender_Mock::$history);

        $this->assertEquals('text/html', $mail->content_type());
        $this->assertContains('foo@example.com', $mail->to);
        $this->assertTag(array('tag' => 'h1', 'content' => 'Hello world!'), $mail->body());
        // ...
    }
}

The website implements a payment solution based on PayPal. I did some work on a PayPal module I had written, which has become a simple external Request factory. It is much more convenient this way than it was before, since it reuses code from Kohana.

I also improved the IPN implementation. It was a little buggy, since I had never really used it, but now it is fully working and tested!

Fixtures

Fixtures are really nicely done. I’ve overloaded Unittest_TestCase to add some on-the-fly ORM generators. For instance, if you need a user to test the login action:

<?php

defined('SYSPATH') or die('No direct script access.');

class Unittest_TestCase extends Kohana_Unittest_TestCase {

    public function getUser() {

        return ORM::factory('User')
            ->values(array(
                'username' => uniqid(),
                'email' => uniqid() . '@ruedesjuristes.com',
                'password' => 'abcd1234'
            ))
            ->add('roles', ORM::factory('Role', array('name' => 'login')));
    }
}

Then, anytime you need a user in your tests,

<?php

public function testLogin() {

    $user = $this->getUser();
    $this->assertFalse(Auth::instance()->logged_in());

    $response = Request::factory()
        ->method(Request::POST)
        ->post(array(
            'username' => $user->username,
            'password' => 'abcd1234'
        ))->execute();

    $this->assertTrue(Auth::instance()->logged_in());
    $this->assertEquals($user->pk(), Auth::instance()->get_user()->pk());
}

This is much better, in my opinion, than relying on Unittest_Database_TestCase for an ORM-based application.

Coverage

It is also the first time I've experienced test coverage and honestly, what an amazing tool. It pretty much analyzes your code while the tests are running and outputs statistics about code complexity and the percentage of lines executed. Untested code is likely not to work, so having good coverage is really important.

This project showed me tools that made development considerably faster and more fun. Not having to debug was probably the best thing I've experienced so far. Also, delivering a high-quality web app really changed the way I see the development process.

Posted on and tagged with Kohana and PayPal.