Shipping Valum on Docker
It’s now official: we have a Docker container that provides the framework and lets you deploy an application easily.
In the past day, I have been working on writing bindings for libmemcached so that I can use it in my project assignment.
I bound the error.h, server.h, server_list.h, storage.h, touch.h and quit.h headers.
It is now possible, from Vala, to do the following operations:
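As a rough sketch of what such operations could look like from Vala (the `Memcached.Context` constructor and the exact signatures here are assumptions modelled on libmemcached’s C API, not the final binding):

```vala
// Hypothetical usage sketch: names mirror the libmemcached C API
// and may not match the final binding exactly.
var context = new Memcached.Context.from_configuration ("--SERVER=localhost:11211".data);

// storage.h: store a value under a key (no flags, no expiration)
context.set ("hello".data, "Hello world!".data, 0, 0);

// fetch it back along with its flags and return code
uint32 flags;
Memcached.ReturnCode error;
var value = context.get ("hello".data, out flags, out error);

// touch.h: refresh the expiration of the key
context.touch ("hello".data, 60);

// quit.h: close the connections with the servers
context.quit ();
```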
I plan to write the complete binding to dig a little deeper into the language. The hardest part (mget and fetch_result) still remains, but it should be done neatly.
I’m working on the beta release that should bring minor improvements and more definitive APIs.
The next step is a stable 0.2.0
release which should happen in the coming
weeks.
This feature was missing from the last release and solves the issue of calling
next
when performing asynchronous operations.
When an async function is called, the callback that will process its result does not execute in the routing context and, consequently, does not benefit from any form of status handling.
app.get ("", (req, res, next) => {
res.body.write_async ("Hello world!".data, () => {
next (); // if next throws anything, it's lost
});
});
What invoke
brings is the possibility to invoke a NextCallback
in the
context of any Router
, typically the current one.
app.get ("", (req, res, next) => {
res.body.write_async ("Hello world!".data, () => {
app.invoke (req, res, next);
});
});
It respects the HandlerCallback
delegate and can thus be used as a handling
middleware with the interesting property of providing an execution context for
any pair of Request
and Response
.
The following example will redirect the client as if the redirection was thrown from the API router, which might possibly handle redirection in a particular manner.
app.get ("api", (req, res) => {
// redirect old api calls
api.invoke (req, res, () => { throw new Redirection ("http://api.example.com"); });
});
As we can see, it offers the possibility of executing any NextCallback in any
routing context that we might need, and of reusing behaviours instead of
reimplementing them.
I wrote a specfile for RPM packaging so that we can distribute the framework on RPM-based distributions like Fedora and openSUSE. The idea is to eventually offer the possibility to install Valum in a Docker container to facilitate the deployment of web services and applications.
I have literally no knowledge about Debian packaging, so if you would like to help me with that, I would appreciate it.
The past week was highly productive and I managed to release 0.2.0-alpha
with a very nice set of features.
There are 38 commits separating v0.1.4-alpha
and v0.2.0-alpha
tags and they
contain a lot of work.
Some of these commits were introduced prior to the seventh week update, the exact date can be checked from GitHub.
In summary:

- write_head and write_head_async to write the status line and headers
- a head_written property to check if the status line and headers have been written to the response

I am working toward a stable release with this series; the following releases will bring features around a solid core in a backward-compatible manner.
Three key points for what’s coming next:
All the features are there, but I really want to take some time to clean the APIs, especially ensuring that naming is correct to make a nice stable release.
I have also sought feedback on the Vala mailing list, so that I can get some reviews of the current code.
Eventually, the 0.2.0
will be released and marked stable. At this point, we
will have a considerable testsuite that can be used against following releases.
According to semantic versioning (the releasing model we follow), any hotfix or minor release has to guarantee backward compatibility, and this can easily be verified by running the testsuite from the preceding release against the new one.
Once we have an initial stable release, it would be great to set up a hook to run the older suites in the CI.
There is a lot of work left to make Valum complete and I might not be done by the summer. However, I can certainly get it production-grade and usable.
CGI and SCGI implementations are already working and I will integrate them in
the 0.3.0
along with some middlewares.
Middlewares are those little pieces of processing that make routing fun, and they were thoroughly described in past posts. The following features will make a really good start:
- caching (E-Tag, Last-Modified, …)
- serving a File or Resource
They will be gradually implemented in minor releases, but first they must be thought out, as there will be no going back.
I plan to work a lot on optimizing the current code a step further by running it through Valgrind to identify CPU, memory and I/O bottlenecks. The improvements can be released in hotfixes.
Valum will be distributed on two popular Linux distributions at first: Ubuntu and Fedora. I personally use Fedora and it would be a really great platform for that purpose as it ships very innovative open source software.
Once distributed, it will be possible to install the software package in a container like Docker or a hosting service like Heroku and make development a pleasing process and large-scale deployment possible.
Tomorrow, I will be releasing the first version of the 0.2 series of the Valum web micro-framework.
There’s still a couple of tests to run to ensure that everything is working
perfectly, and then I will distribute the 0.2.0-alpha.
This series will target an eventually stable release, after which rules will be enforced to ensure API compatibility. It will definitely mark the end of the last days of prototyping.
Good night y’all!
I am happy to announce the release of the 0.1.4-alpha version of the Valum web
micro-framework, which brings minor improvements and complete CLI options for
VSGI.Soup.
The cookies were moved from VSGI to Valum since they are only an abstraction over request and response headers. VSGI aims to be a minimal protocol and should provide just enough abstraction for the HTTP stack.
This is quite a change and brings a wide range of new possibilities with the libsoup-2.4 implementation. It pretty much exposes Soup.Server capabilities through CLI arguments.
In short, it is now possible to:

- listen on all interfaces with --all
- set the Server header with --server-header
- prevent the Request-URI from being URL-decoded with --raw-paths
The implementation used to listen on all interfaces, but this is not
a desired behaviour. The --all flag will let the server listen on all
interfaces.
The behaviour of --timeout has been fixed and now relies on the presence of
the flag to enable a timeout rather than on a non-zero value. This makes it
possible to set a timeout value of 0.
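Putting the options together, launching an application served by VSGI.Soup could look like the following (the executable name is hypothetical; the flags are those described above):

```
./app --all --server-header "Valum/0.1" --raw-paths --timeout 0
```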
There is some potential work for supporting arbitrary sockets, but it would
require a new dependency, gio-unix, which would only support UNIX-like systems.
Since the 0.1.0-alpha
release, I have been releasing a bugfix release every
week and the APIs are continuously stabilizing.
I have released 0.1.2-alpha
with a couple of bugfixes, status code handlers
and support for null
as a rule.
I have also released a 0.1.3-alpha
that brought bugfixes and the possibility to pass a state in next
.
Along with the work on the current releases, I have been working on finishing the asynchronous processing model and developed prototypes for both CGI and SCGI protocols.
It is now possible to transmit a state when invoking the next
continuation so
that the next handler can take it into consideration for producing its
response.
The HandlerCallback
and NextCallback
were modified in a backward-compatible
way so that they would propagate the state represented by a Value?
.
The main advantage of using Value?
is that it can transparently hold any type
from the GObject type system and primitives.
This feature comes in handy for various use cases, such as passing a state through the next continuation.
This example shows how a state can be passed to conveniently split the processing into multiple middlewares and obtain a more modular application.
app.scope ("user/<int:id>", (user) => {
// fetch the user in a generic manner
app.all (null, (req, res, next) => {
var user = new User.from_id (req.params["id"]);
if (!user.loaded ())
throw new ClientError.NOT_FOUND ("User with id %s does not exist.".printf (req.params["id"]));
next (user);
});
// GET /user/{id}/
app.get ("", (req, res, next, state) => {
User user = state;
next (user.username);
});
// render an arbitrary JSON object
app.all (null, (req, res, next, state) => {
var generator = new Json.Generator ();
// generate compacted JSON
generator.pretty = false;
generator.set_root (state);
generator.to_stream (res.body);
});
});
It can also be used to build a component like a Session
from the request
cookies and pass it.
app.all (null, (req, res, next) => {
for (var cookie in req.cookies)
if (cookie.name == "session")
{ next (new Session.from_id (cookie.value)); return; }
var session = new Session ();
// create a session cookie
res.cookies.append (new Cookie ("session", session.id));
next (session);
});
This feature will integrate very nicely with the content negotiation middlewares that will be incorporated in the near future. It will help solve the typical case where one handler produces data and other handlers worry about transmitting it in a desired format.
app.get ("some_data", (req, res, next) => {
next ("I am a state!");
});
app.matcher (accept ("application/json"), (req, res, next, state) => {
// produce a json response
res.write ("{\"message\": \"%s\"}".printf (state.get_string ()));
});
app.matcher (accept ("application/xml"), (req, res, next, state) => {
// produce a xml response
res.write ("<message>%s</message>".printf (state.get_string ()));
});
This new feature made it into the 0.1.3-alpha
release.
I have been delayed by a bug when I introduced the end
continuation to
perform request teardown, and I hope I can solve it by Friday. To avoid blocking
the development, I have been introducing changes in the master branch and
rebased the 0.2/*
upon them.
The 0.2.0-alpha
will introduce very important changes that will define the
asynchronous processing model we want for Valum:
Most of these changes have been implemented and will require tests to ensure their correctness.
Some design changes have pushed the development a bit forward as I think that the request teardown can be better implemented with reference counting.
Two major changes improved the processing, one of them being the exposure of the connection as an IOStream.

Not all implementations provide an IOStream, but one can easily be implemented
and used to free any resources in its destructor.
I hope this can make it into the trunk by Friday, just in time for the lab meeting.
I have been working on CGI and SCGI prototypes as these are two very important protocols for the future of VSGI.
CGI implements the basic CGI specification, which is reused in protocols like FastCGI and SCGI. The great thing is that they can use inheritance to reuse behaviours from CGI, like the environment extraction, to avoid code duplication.
In order to keep a clean history of changes, we use a rebasing model for the development of Valum.
The development often branches when features are too prototypical to be part of the trunk, so we use branches to maintain these different states.
Some branches are making a distinct division in the development like those maintaining a specific minor release.
They are public and meant to be merged when the time seems appropriate.
At some point of the development, we will want to merge 0.2/* work into the master branch, so the merge is a coherent approach.
However, there are also branches that focus on a particular feature and do not constitute a release by themselves. Typically, a single developer will focus on bringing the changes, propose them in a pull request and adapt them following others’ reviews.
It is absolutely correct to push --force
on these submissions as it is
assumed that the author has authority on the matter and it would be silly for
someone else to build anything from a work in progress.
If changes have to be brought, amending and rewriting history is recommended.
If changes are brought on the base branch, rebasing the pull request is also recommended to keep things clean.
The moment everyone seems satisfied with the changes, they get merged. GitHub creates a merge commit even when fast-forwarding, but that’s okay considering that we are literally merging and it has the right semantics.
Let’s take a typical example where we have two branches:

- master, the base branch
- 0.1/route-with-callback, a branch containing a feature

Initially, we have the following commit sequence:
master
master -> route-with-callback
If a hotfix is brought into master, 0.1/route-with-callback
will diverge from
master
by one commit:
master -> hotfix
master -> route-with-callback
Rebasing is appropriate and the history will be turned into:
master -> hotfix -> route-with-callback
When the feature’s ready, the master branch can be fast-forwarded with the feature. We get that clean, linear and comprehensible history.
Rebasing is still a cloudy git command and can lead to serious issues in the hands of a newcomer to the tool.
The general rule is to only rebase non-public commits. If you rebase, chances are that your sequence of commits will no longer match everyone else’s, so making sure that your history is not of public authority is a good start.
git rebase -i
is what I use the most. It’s the interactive mode of the rebase
command and can be used to rewrite the history.
When invoked, you get the rebasing sequence and the ability to process each commit individually:
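For instance, reusing three commits from the changelog above, the sequence presented by git rebase -i looks like this; each line’s action can be changed (pick, reword, edit, squash, fixup, drop) or the lines reordered:

```
pick 0b78178 Exposes more GObject properties for VSGI.
pick 5f8bf4f Request provide HTTPVersion information.
pick e105d00 Configure option to enable threading.
```

Changing a `pick` to `fixup` melds that commit into the previous one, which is exactly the work-in-progress stacking described below.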
I often stack work in progress in my local history because I find it easier to manage than stashes. When I introduce new changes on my prototypes, I fixup the appropriate commit.
However, you can keep a cleaner work environment and branch & rebase around; it’s just as appropriate. You should do whatever you feel works best to keep things manageable.
Hope that’s clear enough!
Valum 0.1.1-alpha
has been released, the changeset is described in the
fifth week update I published
yesterday.
You can read the release notes on GitHub to get a better idea of the changeset.
I am really proud of announcing that release, as it brings two really nice features:
- the next continuation in the routing process
- the all and methods functions
These two features completely replace the need for a setup signal. It’s only
a matter of time before teardown disappears with the end continuation and
the status handling that the 0.2.0-alpha release will bring.
I expect the framework to start stabilizing on the 0.2.* branch once the
asynchronous processing model is well defined and VSGI more solid.
Okay, I’m back to work now ;)
These past two weeks, I have been working on a new release, 0.2.0-alpha, and
fixed a couple of bugs in the last release.
To keep things simple, this report will cover what has been introduced in the
0.1.1-alpha and 0.2.0-alpha releases separately.
The 0.2.0-alpha should be released by the fifth of June (05/05/15) and will
introduce the much-awaited asynchronous processing model.
I have fixed a couple of bugs, backported minor changes from the 0.2.0-alpha
and introduced minor features for the 0.1.1-alpha
release.
Here’s the changelog:
e66277c Route must own a reference to the handler.
ec89bef Merge pull request #85 from valum-framework/0.1/methods
b867548 Documents all and methods for the Router.
42ecd2f Binding a callback to multiple HTTP methods.
e81d4d3 Documents how errors are handled with the next continuation.
5dd296e Example using the next continuation.
ec16ea7 Throws a ClientError.NOT_FOUND in process_routing.
fb04688 Introduces Next in Router.
79d6ef5 Support HTTPS URI scheme for FastCGI.
29ce894 Uses a synchronous request processing model for now.
e105d00 Configure option to enable threading.
5f8bf4f Request provide HTTPVersion information.
0b78178 Exposes more GObject properties for VSGI.
33e0864 Renames View splice function for stream.
3c0599c Fixes a potential async bug with Soup implementation.
91bba60 Server documentation.
It is now possible to access the HTTP version in the Request, providing useful
information about available features.
I fixed the --threading option and submitted a patch to the waf project
that got merged into their trunk.
The FastCGI implementation honours the HTTPS URI scheme if the HTTPS environment
variable is set. This way, it is possible to determine whether the request was
secured with SSL.
I enforced a synchronous processing model for the 0.1.*
branch since it’s not
ready yet.
It is now possible to keep routing if we decide that a handler does not
complete the user request processing. The next
continuation is crafted to
continue routing from any point in the route queue. It will also propagate
Redirection
, ClientError
and ServerError
up the stack.
It is now possible to connect a handler to multiple HTTP methods at once using
all
and methods
functions in the router.
The Route is safer and keeps a strong reference to the handler callback. This
avoids a potentially undesired deallocation.
The next release 0.2.0-alpha
will focus on the asynchronous processing model
and VSGI specification.
In short:

- an end continuation invoked when all processing has completed, so that the server can release the resources

The handler is purely synchronous, which is why it is not recommended to perform blocking operations in it.
Such code should be rewritten with write_async and close_async to return
control to the server as soon as possible.
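As a sketch, a blocking handler can hand control back to the server by switching to the asynchronous variants. The calling style below follows the earlier examples in these posts; error handling is omitted for brevity:

```vala
// blocking: the server waits until the write completes
app.get ("sync", (req, res) => {
    res.body.write ("Hello world!".data);
    res.body.close ();
});

// non-blocking: control returns to the server as soon as the
// write is queued, and the body is closed from the callback
app.get ("async", (req, res) => {
    res.body.write_async ("Hello world!".data, () => {
        res.body.close_async ();
    });
});
```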
Processing asynchronously has a cost, because it delegates the work to an event loop that awaits I/O events.
The synchronous version will execute faster, but it will not scale well with multiple requests and significant blocking. The asynchronous model will outperform this easily due to a pipeline effect.
Request and Response now have a base_stream
and expose a body
that may
filter what’s being written in the base_stream
. The libsoup-2.4
implementation uses that capability to perform chunked transfer encoding.
There is no more inheritance from InputStream or OutputStream, but this can
be reimplemented using FilterInputStream and FilterOutputStream.
I have implemented a ChunkedConverter to convert data into chunks
according to RFC 2616 section 3.6.1.
It can also be used to do transparent gzip compression using the ZlibCompressor.
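Assuming ChunkedConverter implements GLib’s Converter interface (the VSGI namespace and composition here are a sketch, not the confirmed API), it composes with GIO streams like any other converter:

```vala
// base_stream is the raw stream provided by the VSGI implementation;
// everything written through the wrapper comes out chunk-encoded
var chunked = new ConverterOutputStream (base_stream, new ChunkedConverter ());
chunked.write ("Hello world!".data);

// gzip compression composes the same way, using GIO's ZlibCompressor
var gzipped = new ConverterOutputStream (base_stream,
                                         new ZlibCompressor (ZlibCompressorFormat.GZIP));
gzipped.write ("Hello world!".data);
```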
The initial implementation was pretty much a prototype wrapping a MessageBody with an OutputStream interface. It is however possible to steal the connection and obtain an IOStream that can be exposed.
MessageBody would usually worry about transfer encoding, but since we are working with the raw streams, some work will have to be done in order to provide that encoding capability.
In HTTP, the transfer encoding determines how the message body is transmitted to its recipient. It provides information to the client about the amount of data that will be transferred.

Two possible transfer encodings exist:

- a Content-Length header: the recipient expects to receive that number of bytes
- chunked in the Transfer-Encoding header: the recipient expects to receive a chunk size followed by the content of the chunk

TCP guarantees the order of packets and thus the order of the received chunks.
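As an illustration, a body of "Hello world!" (12 bytes) sent with chunked encoding travels as a hexadecimal chunk size followed by the chunk, terminated by a zero-sized chunk:

```
c\r\n
Hello world!\r\n
0\r\n
\r\n
```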
I implemented an OutputStream capable of encoding written data according to
the headers of the response. It can be composed with other streams from the GIO
API, which is more flexible than a MessageBody.
The response exposes two streams:

- the base_stream provided by the implementation
- the body, which may filter what is written to the base_stream

Some implementations (CGI, FastCGI, etc.) delegate the transfer encoding responsibility to the HTTP server.
The setup and teardown approach has been deprecated in favour of the next
continuation and status handling.
Handlers can be connected to statuses thrown during the processing of a request.
If a status handler throws a status, it will be captured by the Router
. This
can be used to cancel the effect of a redirection for instance.
Likewise, status handling can invoke end
to end the request processing and
next
to delegate the work to the next status handler in the queue.
More to come, but I already have a few ideas:
- multipart/* messages (issue #81)
FastCGI streams can benefit from polling, and reimplementing them is planned. The APIs would remain the same, as everything would happen under the hood.
Reversing rules and possibly regular expressions would make URLs in web applications much easier to maintain.
CTPL has a hand-written binding and it would be great to just generate it with GI.