How we build Targetprocess

Posted on Jun 18, 2015

Preface

For the last two years my professional career has been diverging further and further from the path it started on. I began as a desktop application developer a long time ago, occasionally building server-side web APIs and doing some mobile app development, then slowly moved into frontend web development. Nowadays, working at Targetprocess, I spend roughly 70% of my time on the frontend (mostly general presentation logic and client-side business rules, not the crazy HTML markup/positioning/styling stuff) and 30% on server-side APIs.

This is not a marketing post, but I’d like to give you some context on the things I’ll be talking about. Targetprocess is an agile and visual project management tool [1], so you may expect it to be fairly sophisticated on the visual interface side. Indeed, it’s a massive single-page web application written mostly in JavaScript on the client side and C# on the server.

At the same time, I see that some fellow developers still don’t believe it’s possible to build anything beyond a simple product promotion website or a landing page with JavaScript, treating it as a toy language that’s only good for document animation. That’s a common misconception I hear quite often from friends doing mostly server-side and mobile work, and I’d like them (and any other reader) to see how things look from my point of view. I also think the details of how we build a large product may be interesting to anyone in the web development business. [2]

Without further delay, let’s dive right into the technical details!

Application startup

The entry point to the client app is a usual ASP.NET web application, which handles authentication, authorization and top-level routing. Those things should come as no surprise to anyone familiar with building web applications on ASP.NET or any similar technology, so I won’t go into details here.

The authorized user is redirected to the main ASPX file. If you’re not familiar with ASP.NET, you can think of ASPX files as HTML-like templates with a mix of C# scripts, which the server executes to build HTML files that are then rendered by the browser.

You can also use ASPX to generate JavaScript code inside the templated HTML files, and that’s basically what our main ASPX does. We use Require.js as a module system, and the ASPX file generates a basic config script for it. Here is a top-level overview of the main ASPX file:

<!DOCTYPE html>
<html>
<head runat="server"> <%-- server-side meta info generation --%> </head>
<body>
    <%-- A server-side control which renders JavaScript with the main require settings, like paths, shims, etc. --%>
    <tp:RequireJS runat="server" />
    <script>
        // additional Require.js config if necessary
        // require.config({});
    </script>
    <%-- A reference to the client-side application entry point --%>
    <script src="main.js" type="text/javascript"></script>
</body>
</html>

ASPX also references the main JavaScript file which bootstraps the entire client application.

Client application initialization

The mentioned main.js module serves as an entry point and can be simplified down to the following code.

define(function(require) {
    var Application = require('tau/components/component.application');

    var appConfig = {
        // basic app config, nothing specific
    };
    var app = Application.create(appConfig);
    app.initialize();
});

The created application instance is responsible for in-app hash routing, configuring and starting up services and rendering visual components in the proper places, which all leads us to the more interesting part of the client-side architecture.
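To give an idea of what in-app hash routing involves, here is a hedged sketch built on a plain hashchange listener; the function names and route shapes below are illustrative, not the actual Targetprocess internals (the real routing lives inside the Application component):

```javascript
// Parse a location hash like '#board/42' into a route name and argument.
// The '#name/arg' shape is invented here for illustration.
function parseHashRoute(hash) {
    var parts = hash.replace(/^#/, '').split('/');
    return {name: parts[0], arg: parts[1]};
}

// Dispatch hash changes to a route table, e.g. {board: fn, card: fn}.
function startRouting(routes) {
    function route() {
        var parsed = parseHashRoute(window.location.hash);
        var handler = routes[parsed.name];
        if (handler) handler(parsed.arg);
    }
    window.addEventListener('hashchange', route);
    route(); // handle the initial URL on startup
}
```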

Component model

Basically, the top-level element of the application is a "component" (to be precise, the application itself is a component as well!). All components are built around the pub/sub pattern: components fire events that consumers may subscribe to, and they can also react to messages sent by consumers or other components.

This behavior is technically implemented via event buses – simple subscription brokers with the following interface.

var bus = {
    // Subscribe to the specific event on this bus with the given callback
    on: function(eventName, handler) {},
    // Send a message for the event of the specified name with specified arguments
    fire: function(eventName, args) {},

    // Other helper methods for subscription management
    once: function(eventName, handler) {},
    remove: function(eventName, handler) {}
    // ...
};
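Such a bus can be implemented in a few lines. Here is a minimal sketch under that interface; `createBus` is an invented name and the real implementation surely has more features:

```javascript
// Minimal event bus: a map from event names to handler arrays.
function createBus() {
    var handlers = {};
    var bus = {
        // Subscribe to the given event with a callback.
        on: function(eventName, handler) {
            (handlers[eventName] = handlers[eventName] || []).push(handler);
        },
        // Invoke every current subscriber of the event with the arguments.
        fire: function(eventName, args) {
            (handlers[eventName] || []).slice().forEach(function(h) {
                h(args);
            });
        },
        // Subscribe for a single delivery, then unsubscribe.
        once: function(eventName, handler) {
            bus.on(eventName, function wrapped(args) {
                bus.remove(eventName, wrapped);
                handler(args);
            });
        },
        // Drop a previously registered handler.
        remove: function(eventName, handler) {
            handlers[eventName] = (handlers[eventName] || []).filter(function(h) {
                return h !== handler;
            });
        }
    };
    return bus;
}
```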

Every component module exposes at least a single create method, which constructs a new event bus, builds the new component instance on top of it, and returns the constructed bus to the caller. The caller may subscribe to the component’s internal events or send messages to control the component’s behavior.

For example, to render a component in a specified DOM element, you would send it an "init" message, or a "refresh" message to tell it to update its internal state and re-render in place.

Component internals

Technically, the component itself is just a simple container that doesn’t do that much. The heavy work is done by special modules called "component extensions".

An extension is just a module attached to the component’s event bus: it listens to the bus’s events and fires new ones when something happens.

We use OOP-like inheritance similar to John Resig’s "class.js" implementation. Usually, extensions extend some base class and specify event handlers by providing fields with special names, which typically look like "bus eventName".

define(function(require) {
    var BaseExtension = require('tau/core/extension.base');

    return BaseExtension.extend({
        'bus afterRender': function(eventInfo, renderedElementInfo) {
            // Wire up DOM user events with the appropriate handlers,
            // e.g. make AJAX request when user clicks the "save data" button.
        },

        'bus afterRender:last + updateData': function(eventInfo, renderedElementInfo, updateCommand) {
            // When the first rendering is completed,
            // handle the external "updateData" messages sent to the component,
            // for example make another AJAX request and update rendered HTML elements.

            // When the data is updated, notify everyone about it.
            // this.fire('dataUpdated', {data: ...});
        }
    });
});

The component treats such fields as event handlers and parses their names to subscribe them to the corresponding events on its event bus.

To make it easier to handle various kinds of event combinations, field names may represent complex expressions:

  • "bus foo" – handle single event with name "foo"
  • "bus foo + bar" – execute a callback once when you get both "foo" and "bar" events
  • "bus foo > bar" – execute a callback once only when "foo" event is followed by "bar" event
  • "bus foo:last + bar" – given at least a single "foo" event, execute a callback for every "bar" event
  • and so on.
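To give an idea of the machinery behind these declarative names, here is a rough sketch that wires up only the first two forms ("bus foo" and "bus foo + bar"); the real parser supports the full expression syntax, and `subscribeDeclaredHandlers` is an invented name:

```javascript
// Scan an extension's fields for 'bus ...' names and subscribe them
// on the given bus. Handles 'bus foo' and 'bus foo + bar' only.
function subscribeDeclaredHandlers(bus, extension) {
    Object.keys(extension).forEach(function(key) {
        var match = /^bus (.+)$/.exec(key);
        if (!match) return;

        var handler = extension[key].bind(extension);
        var names = match[1].split('+').map(function(s) { return s.trim(); });

        if (names.length === 1) {
            // 'bus foo': plain subscription.
            bus.on(names[0], handler);
        } else {
            // 'bus foo + bar': call the handler once, after every
            // listed event has fired at least once.
            var seen = {};
            names.forEach(function(name) {
                bus.on(name, function(args) {
                    if (seen[name]) return;
                    seen[name] = true;
                    var allSeen = names.every(function(n) { return seen[n]; });
                    if (allSeen) handler(args);
                });
            });
        }
    });
}
```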

Here’s what a typical extension looks like when it’s attached to the event bus (the red line in the picture):

[Figure: Scheme of a simple extension with multiple event handlers]

Obviously, an extension can also fire events itself to interact with other extensions or with the outside world.

Abstract reuse

One of the cool parts of this extension model is that it’s highly composable: you can easily reuse similar code across several components just by plugging in the required extensions.

So if you build an agile project management tool and have several different visual components which render story cards (e.g. a Kanban board, a prioritization list and a roadmap timeline), all of which should support "click the card to open it" behavior, then you only need to write a single extension which wires up click events to the "open card" action and plug it into every component. This brings us to a typical component module definition.

define(function(require) {
    var ComponentCreator = require('tau/components/component.creator');
    var ClickToOpenExtension = require('tau/extensions/extension.click.to.open');
    var SomeOtherExtension = require('./some.other.extension');
    var CardListTemplate = require('./templates/card.list');

    return {
        create: function(componentConfig) {
            var creatorConfig = {
                extensions: [
                    ClickToOpenExtension,
                    SomeOtherExtension
                ],
                // when template is invoked, render cards with 'i-role-card' class names
                template: CardListTemplate
            };

            return ComponentCreator.create(creatorConfig, componentConfig);
        }
    };
});

The extension itself just listens to click events on any .i-role-card element in the rendered scope.

// extension.click.to.open.js

define(function(require) {
    var BaseExtension = require('tau/core/extension.base');
    var $ = require('jQuery');

    return BaseExtension.extend({
        'bus afterRender': function(eventInfo, rendered) {
            rendered.$element.on('click', '.i-role-card', function(e) {
                // get the card ID from the click event and open the card details dialog
            });
        }
    });
});

The Environment

While components and extensions take responsibility for rendering your data and handling user interactions, you’ll likely want to store global state somewhere, e.g. information about the logged-in user. You’d probably also extract services for non-visual cross-component behavior, like an interface to a complex server API or a listener for data change notifications delivered through WebSockets.

The lack of static typing doesn’t let us encode dependency graphs into the service types themselves, which doesn’t leave us much choice of dependency injection techniques. We rely on a simple service locator, which knows how to build most shared services and exists as a singleton per application instance.

This notion of a service container is built into most of the components, so a typical extension receives a reference to the service locator [3] during its initialization routine.

var Extension = BaseExtension.extend({
    'bus initialize': function(eventInfo, initConfig) {
        var serviceContainer = initConfig.context.configurator;
        // get any registered service here
        var loggedUser = serviceContainer.getLoggedUser(); 
    }
}); 
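For illustration, such a container can be sketched as follows. The register/get API and the `createServiceContainer` name are invented here; the `getLoggedUser` call above suggests the real container exposes dedicated accessors instead:

```javascript
// A minimal service-locator sketch: lazily builds each registered
// service once per application instance.
function createServiceContainer() {
    var factories = {};
    var instances = {};
    return {
        // Teach the container how to build a named service.
        register: function(name, factory) {
            factories[name] = factory;
        },
        // Build the service on first request, then reuse the instance.
        get: function(name) {
            if (!(name in instances)) {
                instances[name] = factories[name](this);
            }
            return instances[name];
        }
    };
}

// Usage: one container per application instance.
var container = createServiceContainer();
container.register('loggedUser', function() {
    return {id: 1, name: 'admin'};
});
var user = container.get('loggedUser');
```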

A fix for the missing types

While they don’t change the grand picture, JSDoc type annotations (http://usejsdoc.org/ and https://developers.google.com/closure/compiler/docs/js-for-compiler) are quite helpful when working in modern IDEs like WebStorm. The main idea is that you annotate your classes and variables with type markers, and the engine does some basic type checking against your code, signaling potential issues like calling a non-existent method or passing invalid arguments to a function call.

We try to put annotations on the module exports and function arguments.

define(function(require) {
    /**
     * @class FooService
     * @extends Class
     */
    return Class.extend({
        /**
         * @param {String} bar
         * @param {Number} baz
         */
        foo: function(bar, baz) {
        }
    });
});

No doubt, the annotations look clumsy and redundant, but they work great for API documentation and early detection of type mismatch bugs. Hopefully, one day we will migrate to TypeScript or something similar to make it even better.

Conclusions

  • Component-based architecture allows us to build a fairly complex modular web application with hundreds of different views and use cases.
  • The underlying pub/sub mechanism lets us write loosely coupled reusable modules which rely on the messaging system and don’t even have to know anything about each other.
  • Even though service locators are far from perfect, we use them for simple dependency injection of global services.
  • JSDoc type annotations give us documentation and basic type checking.

What’s next?

With great power comes great responsibility, and not everything is perfect in the land of extensible, dynamically typed message passing. In the next part I’ll talk both about the great things components allow us to do and about the issues they bring to the table.


Notes

1. The simplest example to think of is a number of Kanban-like boards with interactive, information-rich user story cards grouped by teams or features. See the product website for details.

2. The server side is quite complex and intriguing as well, which matches the widespread opinion on server components: most developers accept that "serious" business logic lives on the server. We have tons of smart things in our server-side codebase, so it probably deserves its own blog post.

3. I personally don’t like service locators: I would almost never use one in a language like C# and would prefer proper constructor injection instead. However, most constructor injection techniques look quite ugly in a dynamically typed language with no type introspection like JavaScript, so we have to live with the locator for now.

To keep it manageable, I try to design most modules so that they require the specific services they need, and limit the presence of the service container to the top-level modules only. That makes module dependencies and responsibilities clearer. To put it another way, it’s much better for a module to depend on Service A and Service B than on a container with all possible services at once.
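As a sketch of that guideline (all names here are invented for illustration): the top-level module is the only place that touches the container, while the lower-level module declares exactly which services it needs.

```javascript
// Top level: the only place aware of the container. It resolves the
// concrete services once and passes them down.
function createCardListComponent(container) {
    return createCardList(container.get('api'), container.get('loggedUser'));
}

// Lower level: depends on Service A and Service B explicitly, not on
// a container with all possible services.
function createCardList(api, loggedUser) {
    return {
        load: function() {
            return api.fetchCards({ownerId: loggedUser.id});
        }
    };
}
```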
