So How Do You Improve Yourself?

Posted on Nov 6, 2016

I started reading Apprenticeship Patterns a few days ago. It opens with the notion of mentorship, and it struck a nerve, because that’s what I’ve been thinking about a lot recently.

It feels like the people around me and I all get stuck on some experience-level plateau. For the first several years in the industry you spend a lot of time improving your skills each and every day. When you look back a year or a month, you can clearly see the path you’ve taken and all the ways you’ve improved. But another year or three pass by and you start feeling that something is not quite right. You and the people around you read a lot, absorb and process relevant technical information in every possible way, and yet you still feel the same. You work mostly the same way; you are not moving toward mastery, you’re just doing your job (and maybe doing it well).

I’ve been trying to understand why it happens that way. One of the differences I see (and personally struggle a lot with) is a lack of mentors. When you’re young and inexperienced, there are always a lot of cool guys around whom you perceive as know-it-all wizards. They are always there to inspire you and guide you along the path of self-improvement. You look at them and see a path through the woods. You think, "I’d like to be at least as good as this guy!"

But the years pass by, and now you are the "experienced" guy. Maybe you even have some (meaningless) job title; maybe you are leading a team and guiding newcomers now. But there is a problem: the other experienced guys around you are not so different anymore. You are all pretty much at the same level of expertise. You don’t learn much from each other, apart from the occasional trick or reference to past experience. And it’s damn hard to move forward now.

How do you become better every day now?

Try new languages and approaches? You’ve probably already seen a cycle (or two) of similar technologies rising and falling. It all feels the same. Reactive programming? Been there, done that a few years ago. Redux? Come on, back to centralized message dispatchers and visual state trees. ES7 async/await? Done it with C#5 and TPL before that (has it really been six years?) Elm? Elixir? Meh, read the basic "Introduction to Functional Programming" by Bird and Wadler a long time ago, not impressed anymore. Docker? Kubernetes? Integrating microservices? You see the point, but you’re fighting their ugliness and don’t understand how everyone jumping on the hype bandwagon can call them easy and user-friendly. You saw what easy means. What you see now is bearable, but definitely not easy.

When you’ve been doing software engineering for a long time and it’s been working out quite well so far, how do you detect the gaps in your skills and technique? You’ve got to try really hard to detach yourself from the routine context, take a step (or ten) back and try to get a better overview of what’s going on around you. It would be extremely helpful to have a master-wizard mentor around to tell you "look, Andrew, you can do this much more easily" or "there is another way to accomplish your goal". But there is none. Nothing. Void. You’re off the training grounds and now you’re on your own. I wonder if anyone else feels the same. Am I in the wrong section of the industry? Maybe I just don’t know the right people and communities where I could find new enlightening experiences?

Abstract enough

Posted on Oct 6, 2015

Programmers tend to think there is something special about them.

However, a vast number of IT jobs out there are mostly about business automation and solving auxiliary tasks, with client software being just a tool to help specialists do their real-world jobs: track the state of component research and development, automate status reporting, control operating hardware. The so-called "business logic" of your product is probably just a bloated mess on top of a huge state machine. If you ever put some "entities" in a database, then you are probably building a state machine; the only varying parameters are its size and the number of moving parts.
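To make that concrete, here is a toy sketch (entirely hypothetical, not taken from any real product) of an "entity" whose status field is nothing but a small state machine with a transition table:

```javascript
// A toy "entity" state machine: the allowed transitions for a status field.
var transitions = {
    open: ['inProgress'],
    inProgress: ['open', 'done'],
    done: []
};

// Move an entity to a new state, enforcing the transition table.
function moveTo(entity, newState) {
    var allowed = transitions[entity.state] || [];
    if (allowed.indexOf(newState) === -1) {
        throw new Error('Invalid transition: ' + entity.state + ' -> ' + newState);
    }
    return { id: entity.id, state: newState };
}

var story = { id: 1, state: 'open' };
story = moveTo(story, 'inProgress');
story = moveTo(story, 'done');
```

Most CRUD "business logic" boils down to a table like this plus validation around it; the only varying parameters are its size and the number of moving parts.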

But building "state machine"-ish entity-manipulation applications (CRUD, anyone?) is a well-researched area with a long history. Of course every product is different, but what I’m trying to say is that there is a high chance you don’t actually need that many concepts and high-level abstractions to make your product happen.

What’s more likely is that you will face three types of challenges:

  • How can I make my data manipulation fit into resource constraints like time, storage, memory, latency, etc.?
  • How can I make the product easy to change and adapt to new requirements? I’d say the task of ensuring that newcomers can easily grab the project’s code base and get familiar with it falls into this category as well, because if the code and architecture are easy to grasp, there is a high chance they will be easy to modify as well.
  • How can I achieve the highest possible correctness? I want the product to be stable and have no errors whatsoever.

The first point (resource constraints) doesn’t favor high abstractions that much. Another level of indirection will probably add some overhead on top of your existing resource consumption.

The second one doesn’t play well either. If your codebase is bursting with obscure category-theory concepts or academic constructs, don’t expect newcomers to get acquainted with your product internals fast. Even functional programming, all the hype for the last several years, is still unknown to the masses, despite what your Twitter feed tells you (remember, you choose what to read and whom to follow; don’t make the mistake of assuming that everyone else out there is somehow similar).

That brings us to the last point, correctness. Our academic background keeps telling us that higher-level abstractions from category theory, type systems, language and computation research, and other fascinating CS branches shall help us achieve extraordinary correctness levels, but I’m not convinced yet. They are definitely helpful, to some extent, but I don’t sense any industry-level revolution out there. The basic functional(-ish) programming concepts finding their way into mainstream languages and platforms these days are a nice touch, but they hardly revolutionize the quality of business products.

So do you actually need to care about your level of abstraction? Should you study your profunctors and coalgebras to build a successful product? As with many other things, "abstract enough" is good to go. It’s important to remember that the fact that you can do something doesn’t necessarily mean you should. It’s always helpful to know what’s out there and what you can use when you find a specific task your solution works best for. At the end of the day, your fellow developer will likely appreciate readable and understandable code much more than "smart" code.


How we build Targetprocess

Posted on Jun 18, 2015


For the last two years my professional career has been diverging further and further from the path it was set on early. I started as a desktop application developer a long time ago, occasionally building web APIs on the server side and doing some mobile app development, then slowly got into frontend web development. Nowadays, working at Targetprocess, I think I spend roughly 70% of my time on the frontend (mostly general presentation logic and client-side business rules, not the crazy HTML markup/positioning/styling stuff) and 30% on server-side APIs.

This is not a marketing post, but I’d like to give you some context for the things I’ll be talking about. Targetprocess is an agile and visual project management tool [1], so you may expect it to be fairly sophisticated on the visual interface side. Indeed, it’s a massive single-page web application written mostly in JavaScript on the client side and C# on the server.

At the same time, I see that some fellow developers still don’t believe it’s possible to build anything beyond a simple product promotion website or a landing page with JavaScript, a toy language that is only good for document animation. That’s a common misconception I hear quite often from friends doing mostly server-side and mobile things, and I’d like them (and any other reader) to see how things look from my point of view. I also think the details of how we build a large product may be interesting for anyone in the web development business. [2]

Without further delay, let’s dive right into the technical details!

Application startup

The entry point to the client app is a usual ASP.NET web application which handles authentication, authorization and top-level routing. These things should be no surprise to anyone familiar with building web applications with ASP.NET or any similar technology, so I won’t go into detail here.

The authorized user is redirected to the main ASPX file. If you’re not familiar with ASP.NET, you can think of ASPX files as HTML-like templates with a mix of C# scripts, which are executed by the server to build HTML files that are then rendered by the browser.

You can also use ASPX to generate JavaScript code inside the templated HTML files, and that’s basically what our main ASPX does. We use Require.js as a module system, and the ASPX file generates a basic config script for it. Here is a top-level overview of the main ASPX file:

<!DOCTYPE html>
<html>
<head runat="server"> <%-- server-side meta info generation --%> </head>
<body>
    <%-- A server-side control which renders JavaScript with the main require settings, like paths, shims, etc. --%>
    <tp:RequireJS runat="server" />
    <script type="text/javascript">
        // additional Require.js config if necessary
        // require.config({});
    </script>
    <%-- A reference to the client-side application entry point --%>
    <script src="main.js" type="text/javascript"></script>
</body>
</html>

ASPX also references the main JavaScript file which bootstraps the entire client application.

Client application initialization

The aforementioned main.js module serves as an entry point and can be simplified down to the following code.

define(function(require) {
    var Application = require('tau/components/component.application');

    var appConfig = {
        // basic app config, nothing specific
    };

    var app = Application.create(appConfig);
});

The created application instance is responsible for in-app hash routing, configuring and starting up services and rendering visual components in the proper places, which all leads us to the more interesting part of the client-side architecture.

Component model

Basically, the top-level element of the application is a "component" (to be precise, the application itself is a component as well!). All components are built around the pub/sub pattern: components fire events which consumers may subscribe to, and they can also react to messages sent by consumers or other components.

This behavior is technically implemented via event buses: simple subscription brokers with the following interface.

var bus = {
    // Subscribe to the specific event on this bus with the given callback
    on: function(eventName, handler) {},
    // Send a message for the event of the specified name with specified arguments
    fire: function(eventName, args) {},

    // Other helper methods for subscription management
    once: function(eventName, handler) {},
    remove: function(eventName, handler) {}
};

Every component module exposes at least a single create method, which constructs a new event bus, builds the new component instance upon it, and returns the constructed bus to the caller. The caller may subscribe to the component’s internal events or send messages to control the component’s behavior.

For example, to render a component in a specified DOM element, you would send it an "init" message, or a "refresh" message to tell it to update its internal state and re-render in place.
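For illustration, a minimal bus with this interface could look like the following sketch (a hypothetical toy, not the actual Targetprocess implementation):

```javascript
// A toy pub/sub bus matching the interface above.
function createBus() {
    var handlers = {};
    return {
        on: function(eventName, handler) {
            (handlers[eventName] = handlers[eventName] || []).push(handler);
        },
        fire: function(eventName, args) {
            // copy the list so handlers removed mid-fire don't break iteration
            (handlers[eventName] || []).slice().forEach(function(h) {
                h(args);
            });
        },
        once: function(eventName, handler) {
            var self = this;
            self.on(eventName, function wrapped(args) {
                self.remove(eventName, wrapped);
                handler(args);
            });
        },
        remove: function(eventName, handler) {
            var list = handlers[eventName] || [];
            var index = list.indexOf(handler);
            if (index !== -1) {
                list.splice(index, 1);
            }
        }
    };
}

var bus = createBus();
bus.on('refresh', function(args) {
    // re-render the component here
});
bus.fire('refresh', { reason: 'data changed' });
```

The whole point of the broker is that the firing side never knows who (if anyone) is listening.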

Component internals

Technically, the component itself is just a simple container that doesn’t do much. The heavy lifting is done by special modules called "component extensions".

An extension is just a module which is attached to the component’s event bus: it listens to the bus’s events and sends new ones when something happens.

We use OOP-like inheritance similar to John Resig’s "class.js" implementation. Usually, extensions extend some base class and specify event handlers by providing fields with special names, which usually look like "bus eventName".

define(function(require) {
    var BaseExtension = require('tau/core/extension.base');

    return BaseExtension.extend({
        'bus afterRender': function(eventInfo, renderedElementInfo) {
            // Wire up DOM user events with the appropriate handlers,
            // e.g. make an AJAX request when the user clicks the "save data" button.
        },

        'bus afterRender:last + updateData': function(eventInfo, renderedElementInfo, updateCommand) {
            // When the first rendering is completed,
            // handle the external "updateData" messages sent to the component,
            // for example make another AJAX request and update rendered HTML elements.

            // When the data is updated, notify everyone about it.
            // this.fire('dataUpdated', {data: ...});
        }
    });
});

The component treats such fields as event handlers and parses their names to subscribe them to the corresponding events on its event bus.

To make it easier to handle various combinations of events, the field names may represent complex expressions:

  • "bus foo" – handle single event with name "foo"
  • "bus foo + bar" – execute a callback once when you get both "foo" and "bar" events
  • "bus foo > bar" – execute a callback once only when "foo" event is followed by "bar" event
  • "bus foo:last + bar" – given at least a single "foo" event, execute a callback for every "bar" event
  • and so on.
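To give a flavor of how such expressions could be interpreted, here is a hypothetical sketch of the simplest combination, "foo + bar" (run a callback once after both events have been seen); the real parser is more involved:

```javascript
// Toy implementation of the "+" combination: run `callback` once,
// after each of the named events has fired at least once.
// Assumes `bus` exposes an on(eventName, handler) method.
function onAll(bus, eventNames, callback) {
    var pending = eventNames.length;
    var seen = {};
    eventNames.forEach(function(name) {
        bus.on(name, function() {
            if (!seen[name]) {
                seen[name] = true;
                pending -= 1;
                if (pending === 0) {
                    callback();
                }
            }
        });
    });
}
```

The other operators (">", ":last") would be variations on the same bookkeeping: remembering which events have been seen and in what order.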

Here’s how a typical extension may look when it’s attached to the event bus (the red line in the picture):

Scheme of a simple extension with multiple event handlers

Obviously, an extension can also fire events itself to interact with other extensions or with the outside world.

Abstract reuse

One of the cool parts of this extension model is that it’s highly composable, so you can easily reuse similar code in several components just by plugging in the required extensions.

So if you create an agile project management tool and you’ve got several different visual components which render story cards (e.g. a Kanban board, a prioritization list and a roadmap timeline), and all of them should support "click the card to open it" behavior, then you only need to write a single extension which wires up click events to the "open card" action and plug it into every component. That brings us to the typical component module definition.

define(function(require) {
    var ComponentCreator = require('tau/components/component.creator');
    var ClickToOpenExtension = require('tau/extensions/extension.click.to.open');
    var SomeOtherExtension = require('./some.other.extension');
    var CardListTemplate = require('./templates/card.list');

    return {
        create: function(componentConfig) {
            var creatorConfig = {
                extensions: [
                    ClickToOpenExtension,
                    SomeOtherExtension
                ],
                // when template is invoked, render cards with 'i-role-card' class names
                template: CardListTemplate
            };

            return ComponentCreator.create(creatorConfig, componentConfig);
        }
    };
});

The extension itself just listens to click events on any .i-role-card element in the rendered scope.

// extension.click.to.open.js

define(function(require) {
    var BaseExtension = require('tau/core/extension.base');
    var $ = require('jQuery');

    return BaseExtension.extend({
        'bus afterRender': function(eventInfo, rendered) {
            rendered.$element.on('click', '.i-role-card', function(e) {
                // get the card ID from the click event and open the card details dialog
            });
        }
    });
});

The Environment

While components and extensions take responsibility for rendering your data and handling user interactions, you’ll likely want to store global state somewhere, e.g. information about the logged-in user. You’d probably also extract services for non-visual cross-component behavior, like an interface to a complex server API or a listener for data change notifications delivered through WebSockets.

The lack of static typing doesn’t let us encode dependency graphs into the service types themselves, which doesn’t leave us much choice of dependency injection techniques. We rely on a simple service locator which knows how to build most shared services and exists as a singleton per application instance.
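As a sketch, such a locator boils down to a registry of factories with lazy singleton instantiation (hypothetical code; the real configurator has many more responsibilities):

```javascript
// Toy service locator: register factories, build services lazily,
// cache each instance so the container acts as a per-application singleton map.
function createServiceContainer() {
    var factories = {};
    var instances = {};
    return {
        register: function(name, factory) {
            factories[name] = factory;
        },
        get: function(name) {
            if (!(name in instances)) {
                if (!factories[name]) {
                    throw new Error('Unknown service: ' + name);
                }
                // the factory receives the container to resolve its own dependencies
                instances[name] = factories[name](this);
            }
            return instances[name];
        }
    };
}

var container = createServiceContainer();
container.register('loggedUser', function() {
    return { name: 'admin' };
});
```

Passing the container to each factory lets services resolve their own dependencies without any static wiring.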

This notion of a service container is built into most of the components, so a typical extension receives a reference to the service locator [3] during its initialization routine.

var Extension = BaseExtension.extend({
    'bus initialize': function(eventInfo, initConfig) {
        var serviceContainer = initConfig.context.configurator;
        // get any registered service here
        var loggedUser = serviceContainer.getLoggedUser();
    }
});

A fix for the missing types

While they don’t change the grand picture, JSDoc type annotations (http://usejsdoc.org/ and https://developers.google.com/closure/compiler/docs/js-for-compiler) are quite helpful when working in modern IDEs like WebStorm. The main idea is that you annotate your classes and variables with type markers, and the engine does some basic type checking against your code, flagging potential issues like calling a non-existent method or passing invalid arguments to a function call.

We try to put annotations on the module exports and function arguments.

define(function(require) {
    /**
     * @class FooService
     * @extends Class
     */
    return Class.extend({
        /**
         * @param {String} bar
         * @param {Number} baz
         */
        foo: function(bar, baz) {}
    });
});

No doubt, the annotations look clumsy and redundant, but they work great for API documentation and early detection of type mismatch bugs. Hopefully, one day we will migrate to TypeScript or something similar to make it even better.


  • Component-based architecture allows us to build a fairly complex modular web application with hundreds of different views and use cases.
  • The underlying pub/sub mechanism lets us write loosely coupled reusable modules which rely on the messaging system and don’t even have to know anything about each other.
  • Even though the service locators are far from being perfect, we use them for a simple dependency injection of the global services.
  • JSDoc type annotations give us documentation and basic type checking.

What’s next?

With great power comes great responsibility, and not everything is perfect in the land of extensible, dynamically typed message passing. In the next part I’ll talk both about the great things components allow us to do and about the issues they bring to the table.


1. The simplest example to think about is a number of Kanban-like boards with interactive, information-rich user story cards grouped by teams or features. See the product website for details.

2. The server side is quite complex and intriguing as well, but that matches the widespread opinion on server components: most developers accept that your "serious" business logic lives on the server. We have tons of smart things in our server-side codebase, so it probably deserves its own blogpost.

3. I personally don’t like service locators. I would almost never use one in a language like C# and would prefer proper constructor injection instead. However, most constructor injection techniques in a dynamically typed language with no type introspection, like JS, look quite ugly, so we have to live with the locator for now.

To keep it manageable I try to design most modules so that they require the specific services they need, and limit the presence of the service container to the top-level modules only. That makes module dependencies and responsibilities clearer. To put it another way, it’s much better for a module to depend on Service A and Service B than on a container with all possible services at once.

You're alone and nobody will help you

Long story short, I had to create a modern web app.

Actually, it was an HTML page prototype for our new feature. Basic layout and a fairly complex JavaScript to check our design and interaction ideas.

I’ve been doing a lot of web development for the last two years, but it was mostly about extending an existing long-running project. The opportunity to learn how to set up a completely new solution sounded too tempting to miss.

Side note: I was going to write a rant-like blogpost initially. However, the final text turned out to be more of a guide than a bunch of complaints, so you may actually think of it as a from-zero-to-infinity guide to setting up a bare-bones JavaScript web app and its accompanying infrastructure on Windows.

Baby steps

I start with a simple index.html.

Since it’s going to be mostly about interactive interfaces, and I don’t want to spend that much time thinking about boilerplate markup and event handling, I’m going to build it with React.js.

Even though the codebase is not going to be that huge, I probably want a proper module system, too. OK, we’ve been using Require.js for quite a long time now, so it looks like a reasonable choice. Just put a <script> tag with a reference to the main JS file into index.html.

Obviously, I don’t want to download the libraries manually, so I’m going to pick Bower for dependency management.

I ask a colleague whether I’m doing everything right, just in case. Maybe there is a better way, you know… Well, it turns out that the cool guys use webpack these days. All right, let’s check it out.

Webpack is an advanced module bundling system which takes your code modules and assets and bundles them together with 3rd-party libraries, transforming them on the way (for example, from one language to another) and packing them for optimal performance. Sounds good.

But don’t rush to follow the getting-started guide: the cool guys don’t do that nowadays. Why write any bootstrapping code at all when you can reuse boilerplate crafted by the worldwide community? That’s the main idea behind scaffolding tools like Yeoman. You just choose the kind of web app you’d like to build from a fairly large gallery of templates, and the tool generates the typical code you’d otherwise have to write yourself.

Want an enterprise Angular skeleton built according to the "best practice recommendations"? Sure thing! A "hipster stack for Java developers"? Why not? ASP.NET MVC? Suit yourself. You can find almost anything out there, and if it doesn’t exist, feel free to submit your own generator.

npm install everything

All right, let’s dive into bootstrapping the app itself!

First of all, I’d highly recommend using Chocolatey as a general package manager on Windows. It installs almost everything: from media apps to editors to Windows features. It’s also a powerful tool for OS setup automation in case you need to install a lot of stuff with almost no hassle.

Go to http://chocolatey.org and install it, then restart the command prompt (duh..)

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

Now you can use commands like clist and cinst to browse and install packages.

Most of the web tools run under Node.js, so installing it should be your next step.

cinst nodejs
cinst npm

The aforementioned Yeoman comes next.

npm install -g yo

Now wait until it fetches and installs dozens of packages :grin:

You’ve probably started worrying about wasting your life at this point, so it’s time to go shopping, err… finding the Yeoman generator which suits your needs (remember, a generator is a template Yeoman uses to scaffold the project structure).

I’ve picked react-webpack:

npm install -g generator-react-webpack

Now you’re finally ready to generate some code, yay! In a project dir, run:

yo react-webpack 

I particularly enjoy that yo syntax. It sounds extremely gangsta and probably also has a --beatch flag.

OK, go grab some coffee, because NPM is going to fetch even more packages this time. In the end, I had about 100 MB of them for a simple web app. Not bad!

Just follow the documentation from now on. What could possibly go wrong?

grunt build

Oh, you also need to install Grunt. In case you didn’t know, that’s the "JavaScript task runner". Basically, it’s a service you can run to automate basic webdev tasks like compiling your scripts, running tests or reloading the live app on the fly whenever you change the code (yes, that’s possible, and it even works. Sometimes).


npm install -g grunt-cli
grunt build

If nothing breaks at this point, you should get a basic app compiled into your \dist\ directory. You probably won’t be able to load it via index.html though, because this template was designed to be used with server-based apps, and it generates broken links to assets for the file: protocol.

You should launch the local web server instead.

grunt serve

The web server renders your actual app into an iframe on a page with hot reload and basic compilation trace output.

If you’ve come this far and you’re not overly depressed with setting up the infrastructure, then congratulations on getting to the application code!

You’re probably now ready to write the first class for your new app. Since we use React, we can utilize its JSX loader for webpack to build ES6-like classes.

class Model {
    constructor(name) {
        this._name = name;
    }

    get greeting() {
        return "Hello, " + this._name;
    }
}

Unfortunately, this code doesn’t compile (I still laugh about compiling JavaScript, but that’s another story), because JSX doesn’t support getters (at least at the time of writing).

OK, let’s use the transpiler then!

6to5 was the go-to choice, but NPM will tell you it’s deprecated upon installation, so let’s try its successor, Babel. It has a loader for webpack (a loader is a module that transforms source code for bundling).

npm install babel-loader --save-dev

Now open the webpack.config.js file, locate the loaders config for JS files, and add the Babel transformer to the pipeline.

loaders: [{
    test: /\.js$/,
    loader: 'react-hot!jsx-loader?harmony!babel'
}]

This syntax tells webpack to apply the Babel transform, then the jsx-loader transform with the harmony flag, then react-hot for hot application reload (webpack applies loaders from right to left).

grunt serve

Yay, finally up and running! Time to get back to coding models.

At some point you’ll probably want to use underscore or lodash to manipulate your data structures. Good luck with that, because it all breaks down quite fast.

If you reference underscore, you’ll probably get a runtime TypeError quite soon from the depths of underscore’s initialization routine:

// Establish the root object, window in the browser, or exports on the server.
var root = this;
var previousUnderscore = root._;

Babel prefixes every "webpack"-ed module with a 'use strict'; instruction by default, which prevents underscore from storing a global host variable (in strict mode, this is undefined at the top level of a module instead of pointing to the global object). I honestly haven’t found anything better than disabling this transformation altogether, so my loaders config for webpack now looks like this.

loaders: [{
    test: /\.js$/,
    loader: 'react-hot!jsx-loader?harmony!babel?blacklist=useStrict'
}]

A final grunt serve, and it all seems to work now. Phew… Now, several hours later, I can get to the actual coding and prototyping!


At first I was quite disappointed with the number of things you have to do to make such a small app work. The explosion of technologies and platforms makes it a little hard to jump-start and dive right into development, compared to the typical desktop, mobile, web or server applications you build these days with .NET, for example.

The funny thing is that mature front-end developers don’t actually feel any pain here. Reflecting on that experience and writing this blogpost made me think that it’s not actually that bad; you just have to keep your mind open to grasp the entire network of small technologies and their relations to each other.

Hopefully, front-end web development will evolve even further in the coming years, the irrelevant solutions will die off, and we will get several good, stable and fairly easy tools to build reliable things in a fast and enjoyable manner.


Are you ashamed of your API?

Posted on Feb 10, 2015

I’ve been integrating some older modules of our codebase into a new one recently, which naturally involved finding colleagues with the relevant knowledge of those modules and asking them questions. It’s a natural workflow in our imperfect world of leaky abstractions and the lack of true black boxes, so there is nothing to get mad about.

However, one particular thing did strike me on my journey for knowledge and made me wonder. Most of the time my question “Hey, do you remember that %feature% thing?” was followed by an immediate “oh, damn, yeah, that thing definitely sucks” reaction. It was funny because I hadn’t even mentioned any issues with the module (and I actually hadn’t had any), but the reaction was clearly negative.

Why do some developers have an automatically negative attitude towards the things they have built? Is it because they sometimes favor feature delivery speed over quality? Is it because of the heart-breaking tradeoffs and the legacy they had to accommodate during development?

Natural tradeoffs are nothing to be ashamed of. At the end of the day, our primary goal is to create a good enough solution for a given problem with the resources available at the moment. There is nothing wrong with not being able to create the single, ultimately perfect solution. However, keeping the “will you be ashamed of it in several months?” question in mind may be quite helpful the next time you build something new.


On Book Libraries

Posted on Feb 9, 2015

My grandparents used to have a huge home library in their old house. My parents had a smaller one; most of their books were either borrowed from their parents’ libraries or were purchased for their studies at university. I have about 10 books at home, half of which may be considered “mine”; the other half technically belongs to my wife (meaning I’ve never used them and have no particular interest in doing so), and I guess I haven’t read more than 30 pages of at least one of “my” books.

For centuries, home libraries were associated with the erudition, intelligence and even nobility of their owners, because if your family owned a vast number of books, there was a chance that you’d spend some part of your life reading them and would eventually become smarter than the average folk. Plus, if you were not struggling for survival, you didn’t have that much choice of entertainment and time-wasting, so reading was a reasonable way to spend your time. And don’t forget that there was much less content to read than there is nowadays; there were no amateur bloggers like me throwing their illogical and unprofessional thoughts about everything into the huge world-wide pile of garbage.

Things started to change dramatically in the second half of the 20th century, with its rise in quality of life and the wide adoption of television, mass media and the internet, which brought us to the 21st century with its cheap and easy access to information and the spread of affordable e-readers and always-connected personal mobile devices. There is now much more media content in the world than you could consume in your entire lifespan.

However, human habits are not so quick to change, so we still have home libraries. You may disagree, but I think there is no point in them nowadays. Given the enormous amount of content, it’s likely that you won’t read most of the books more than once, and they will keep standing on a shelf collecting dust for many years.

Public libraries were a good step in reducing the clutter at home – you’d borrow a book, finish it, and then let someone else read it. However, you can buy or download books for free so easily these days that there is no need to go to public libraries at all. And it’s not only about fiction – the web gives you access to professional literature and research papers as well.

I personally don’t buy the point about paper books being superior to their electronic counterparts due to “that magic real paper feel”. Ebooks are much more convenient and versatile – you don’t have to carry around the extra weight, and you can read them from any device, with automatic progress synchronization, easy bookmarks and notes, language translations, etc. However, there are some books with awesome design and layout which are a pure joy to look at, e.g. Type Matters by Jim Williams, but I don’t think most of your books are like that.

For me, the public and home libraries our ancestors knew are dead. I enjoy our digital future, even if I don’t look cool in the eyes of some elitists praising their smelly dust collectors.

License notes:
The featured image is a property of James Kirkus-Lamont

Your code sucks

Posted on Mar 17, 2014

So you are sitting at your desk wondering how your beautiful, carefully thought-out abstractions have turned into an ugly monster, and why your precious codebase smells like a giant mess. You may not be the smartest guy around, but you are not that stupid or unqualified after all.

Well, just accept that your code sucks and stop worrying about it. Even the most brilliant programmers I know come up with messy code or leaky abstractions sometimes. To fail is human. Business requirements come and go, the product evolves, and you are growing as a professional. Don’t be OK with it – just stop torturing yourself trying to find the 100% perfect solution. If it works for now and looks easy enough to change later, then it’s probably fine. On a side note, I would rather wonder why you are OK with any code at all: if you can’t spot an issue here or there, you are probably just not skilled enough to see the defects.

I’m not talking about some ‘forget the code, WE ARE SHIPPING THE PRODUCT HERE, BEATCH!’ management bullshit. Beautiful code and design are what make your product easy to maintain and improve in the long run. The more skilled and experienced you become, the more likely you are to fall into the “disappointed in everything” mental trap. Try to think about it rationally – you’ve been around for quite some time, you’ve built some great stuff, and your projects haven’t fallen apart due to awful technical decisions. And though your code sucks from your point of view, perhaps it’s not that bad on the absolute scale of code awesomeness.

What can we do to make it less painful? Code review really helps a lot. Some developers complain about code review not being effective enough, i.e. it only helps you find the most basic formatting and code issues. I was a bit skeptical myself not so long ago, but even if it only helps to fix formatting and minor code issues, that’s a great improvement! I would say it’s a matter of trying it and figuring it out for yourself. In my experience, early code reviews help to avoid design pitfalls, for example when a developer has not thought about a specific use case of the module he is working on. What’s even more important is knowledge sharing. Two heads holding the feature implementation details are definitely better than one (you never know – that lone developer working on the sophisticated data processing logic may get hit by an asteroid the next day).

Try and fail, and learn from your experience. Reading books on code quality doesn’t help: unless you’ve suffered from a fucked-up design or messed-up code, you won’t understand why it’s important to think about your future self supporting the codebase.

The code sucks. Repeating this statement makes no sense. What does matter is understanding why it sucks, accepting the tradeoffs and constantly thinking about the ways to improve it.

On flat design

Posted on Nov 5, 2013

Flat is the new black. You’ve probably noticed that flat is the trendiest thing in mobile and web design these days (although a true hipster may tell you that it’s too mainstream, so he has moved on to something drastically new). Pushing the content, promoting simple visual choices and borrowing inspiration from the good old Swiss design may sound easy to a casual reader who doesn’t think too much about it. The most striking example is what people often say about Microsoft’s design experiments, namely Windows Phone and Windows 8. I’ve heard the “just remove gradients, shadows, and rounded borders, add some tiles – and you are good to go” ramblings so many times. Sometimes it makes me sad that users confuse an abstract idea (design principles) with its specific implementation (design patterns, solutions or even controls).

Apple has pushed it even further with iOS 7. Well, not Apple itself, but the journalists, writers and users. They’ve built a simple mental model associating the word “flat” with the recent Microsoft and Apple products, effectively obscuring the original principles behind flat design. A random article which popped up in my feed recently is a nice example – the author tells you that flat design may not be that great because some basic principles are poorly implemented in iOS 7 and Windows 8. Well, that’s a problem with those specific products; it doesn’t mean flat design is flawed in general!

Some Pinterest boards on mobile and web design are good examples of how great flat can work, and how easy it is to fail with unwise solutions trying to mimic it. On a side note – try to find any devices out there besides the iPhone. Damn that marketing.


The Microsoft community has recently realized that blindly following the platform guidelines gets in the way of creativity. The guidelines have been repositioned as basic patterns to get you up and running, so that you can quickly build a beautiful, user-friendly app even if you don’t have proficient designers on your team. However, they don’t discourage you from implementing your own great UX ideas and going beyond the basic controls and behaviors – just try to be nice and don’t publish crappy apps with inconsistent and irrational visual design.

Mini Recorder 2.3.0 released

Posted on Mar 6, 2013

The new 2.3.0 update brings several major improvements and fixes the known issues.

1. Fixed issues with clipping and repeating sound when recording under a locked screen.
Some phones (especially the older ones) were affected by a severe bug which corrupted audio files when the phone’s screen became locked during recording. Basically, when the user locks the screen, the Windows Phone operating system goes into low-power mode. The documentation recommends pausing active foreground processes, stopping timers, etc. It turned out that the IO system became a performance bottleneck once the screen got locked, and the app was unable to properly process the microphone buffer data. Lowering the auto-save frequency fixed the issue.
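The fix boils down to batching: instead of flushing every microphone buffer to storage, accumulate chunks in memory and write them out less often. A minimal, platform-neutral sketch of the idea in Python (the class name and threshold are illustrative, not the actual Mini Recorder code):

```python
class BufferedRecorder:
    """Accumulates audio chunks in memory and flushes them to disk
    only when a size threshold is reached, reducing IO frequency."""

    def __init__(self, path, flush_threshold=64 * 1024):
        self.path = path
        self.flush_threshold = flush_threshold  # bytes held before a write
        self.pending = bytearray()
        # Start with an empty target file.
        open(self.path, "wb").close()

    def on_buffer_ready(self, chunk: bytes):
        # Called for every microphone buffer; a cheap in-memory append.
        self.pending.extend(chunk)
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # The only place that touches the (slow) IO system.
        if self.pending:
            with open(self.path, "ab") as f:
                f.write(self.pending)
            self.pending.clear()
```

Under a locked screen the threshold can be raised, trading a little memory for far fewer auto-save operations.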

2. Recording is automatically paused when application is deactivated.
If you press the search or the start button while recording, the app will automatically pause itself. And when you return to the app, the recording timer will display the correct duration.

3. Application tries to save the current recording when closing unexpectedly.
Let’s accept it – each and every app has bugs. Sometimes these bugs are not even noticeable, and sometimes they cause unexpected errors which crash the app. With the new 2.3.0 version, if such an error occurs, Mini Recorder will try to save the currently active recording before closing.

4. User can pin the recording to the start screen.
On the recording details page, tapping the “pin” icon on the application bar will create a secondary live tile displaying the recording name and duration (and even the attached photo, if selected) on the front side, and a small notes excerpt on the back side.

5. [Windows Phone 8 only] User can export the recording to the Media Library.
If you are running Mini Recorder on a Windows Phone 8 device, you will see the new “music+videos hub” option. Choosing it copies the audio file to the /Music folder on your phone. You can access the exported files either from the built-in Music+Videos app (look for the new Mini Recorder artist) or by connecting the phone to your PC (look for the /Music/Mini Recorder folder). Isn’t it cool? You don’t even need to connect your cloud storage accounts anymore.

6. User can turn off the level meter.
The fancy animated tiles may not fit everyone’s preferences. If you would like to turn them off, just head to the settings page.

7. Fixed some SkyDrive uploading issues.
If the title of your recording contained one of the reserved characters, like a slash or a question mark, SkyDrive refused to upload it. The new version automatically replaces them with an underscore.
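The sanitization step can be sketched like this. Note that the exact character set SkyDrive rejects is my assumption (the usual Windows reserved set); only the slash and question mark are from the release note:

```python
import re

# Characters assumed to be rejected by SkyDrive (based on the standard
# Windows file-name restrictions; '/' and '?' are the ones reported above).
RESERVED = r'[\\/:*?"<>|]'

def sanitize_title(title: str) -> str:
    """Replace every reserved character with an underscore."""
    return re.sub(RESERVED, "_", title)
```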

You can download the recent FREE version here – http://windowsphone.com/s?appId=1a3ebb19-c12d-4dd6-8766-69b4b2af7a06

If you want to thank me and donate 2 bucks please consider purchasing the paid version – http://windowsphone.com/s?appId=356d8d24-41c5-4248-b18d-8c7b960501a8

Many more features are planned for the upcoming versions. If you have any suggestions or would like to report an error, contact me from the app’s feedback page.

I have always been frustrated by sharing screenshots taken on my PC. Back in the old days, I had to follow the “press Prt Scr, open the image editor, press Ctrl+V, Ctrl+S, choose the save destination, open that directory, drag-n-drop the file to messenger/Word/whatever” routine. Sharing was painful.

Then there were tons of advanced screenshot-capturing applications that I just couldn’t get used to. Most of them were typical shareware trying to install all sorts of toolbars and extensions. If you were a good boy who didn’t want to bloat his system, you would probably keep away from such software. (Not trying to insult anyone here – I am pretty sure there were decent apps too.)

Years later, Windows 7 introduced the Snipping Tool, which I instantly fell in love with. It was integrated, fast and much easier to use than anything I’d tried before. The only thing still itching me was sharing the screenshots. I’d been actively using Dropbox, and its sharing features, especially the Public folder, allowed me to quickly save a screenshot, grab a link and send it to someone. Not bad, but not perfect yet.

Recently I found out that Windows 8 gives you even better screen capture facilities. Pressing Win+PrtScr automatically takes a screenshot (you will see the screen dim for a second) and saves it to the Screenshots folder inside your personal Pictures directory. What if I could save it directly to my public Dropbox folder? That would be super awesome! Hit a keystroke, locate the file in the folder (you don’t even have to open Explorer manually – just right-click the Dropbox icon in the system tray and open the ‘Recently changed files’ menu), right-click it and copy the public link.

Unfortunately, you won’t find an option to change the default screenshots folder. But the registry editor is your life saver when it comes to tweaking Windows. Open up regedit.exe and go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\FolderDescriptions\{b7bede81-df94-4682-a7d8-57a52620b86f}. If you see the Name value set to Screenshots, then you are on the right track. Delete the ParentFolder entry, and change the value of the RelativePath entry to the directory where you want to store your screenshots (mine is “d:\dropbox\public\screenshots” without the quotes).
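If you prefer not to click through regedit by hand, the same tweak can be captured in a .reg file and double-clicked to apply (this is a sketch of the steps above: the GUID is from the post, the target path is my own example, and editing HKEY_LOCAL_MACHINE requires administrator rights):

```reg
Windows Registry Editor Version 5.00

; Known-folder definition for Screenshots (GUID as described above).
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\FolderDescriptions\{b7bede81-df94-4682-a7d8-57a52620b86f}]
; Remove the ParentFolder entry so RelativePath is treated as absolute.
"ParentFolder"=-
; Point RelativePath at your own public Dropbox folder.
"RelativePath"="d:\\dropbox\\public\\screenshots"
```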

Now you just have to restart your machine and ensure that new screenshots are automatically saved into your public Dropbox folder.