In synchronous code, try/catch/finally
provides a simple and familiar, yet very powerful idiom for performing a task, handling errors, and then always ensuring we can clean up afterward.
Here’s a simple try/catch/finally
example in the same vein as the original getTheResult()
from Part 1:
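A sketch of what that looks like; `thisMightFail`, `recoverFromFailure`, and `alwaysCleanup` are hypothetical stand-ins here so the example is self-contained:

```javascript
// Hypothetical stand-ins: thisMightFail throws, recoverFromFailure
// supplies a fallback, and alwaysCleanup records that cleanup ran.
function thisMightFail() {
    throw new Error("things went badly");
}

function recoverFromFailure(e) {
    return "default result";
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function getTheResult() {
    try {
        // Attempt the task
        return thisMightFail();
    } catch (e) {
        // Handle the error, possibly recovering
        return recoverFromFailure(e);
    } finally {
        // Always clean up, no matter what
        alwaysCleanup();
    }
}
```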
As we’ve seen, attempting to simulate even the try/catch
via a callback-based approach is fraught with pitfalls. Adding the notion of finally
, that is, guaranteed cleanup, only makes things worse.
Using Promises, we can build an approach that is analogous to this familiar try/catch/finally
idiom, without deep callback structures.
Let’s start with a simpler version of example above that only uses try/catch
, and see how we can use Promises to handle errors in the same way.
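A sketch of that simpler version, again with hypothetical stand-in helpers:

```javascript
// Hypothetical stand-ins so the sketch is self-contained
function thisMightFail() {
    throw new Error("things went badly");
}

function recoverFromFailure(e) {
    return "default result";
}

function getTheResult() {
    try {
        return thisMightFail();
    } catch (e) {
        return recoverFromFailure(e);
    }
}
```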
And now, as in Part 2, let’s assume that thisMightFail()
is asynchronous and returns a Promise. We can use then()
to simulate catch
:
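Sketched here with a native Promise standing in for a Promises/A implementation, and a `thisMightFail` that is assumed to return a Promise:

```javascript
// Assumption: thisMightFail now returns a Promise that rejects on failure
function thisMightFail() {
    return Promise.reject(new Error("things went badly"));
}

function recoverFromFailure(e) {
    return "default result";
}

function getTheResult() {
    // No onFulfilled handler: a successful result propagates unchanged.
    // recoverFromFailure plays the role of the catch block.
    return thisMightFail().then(null, recoverFromFailure);
}
```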
Waitaminit, that’s even less code than using try/catch
! What’s going on here?
This example introduces two very important facts about how Promises behave. The first is:
If no onFulfilled
handler is provided to then()
, the fulfillment value will propagate through unchanged to the returned Promise.
We’re not supplying an onFulfilled
handler when calling then()
. This means that a successful result from thisMightFail()
simply will propagate through and be returned to the caller.
The other important behavior is:
A handler may produce either a successful result by returning a value, or an error by throwing or returning a rejected promise.
We are supplying an onRejected
handler: recoverFromFailure
. That means that any error produced by thisMightFail
will be provided to recoverFromFailure
. Just like the catch
statement in the synchronous example, recoverFromFailure
can handle the error and return
a successful result, or it can produce an error by throwing or by returning a rejected Promise.
Now we have a fully asynchronous construct that behaves like its synchronous analog, and is just as easy to write.
Hmmm, but what about that null
we’re passing as the first param? Why should we have to type null
everywhere we want to use this asynchronous try/catch
-like construct? Can’t we do better?
While the primary interface to a Promises/A+ Promise is its then()
method, many implementations add convenience methods, built, with very little code, upon then()
. For example, when.js Promises provide an otherwise()
method that allows us to write this example more intuitively and compactly:
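A sketch of the `otherwise()` version; since when.js isn't assumed to be available here, a plausible one-line shim stands in for it:

```javascript
// Shim standing in for when.js's otherwise() convenience method
Promise.prototype.otherwise = function (onRejected) {
    return this.then(void 0, onRejected);
};

// Hypothetical stand-ins
function thisMightFail() {
    return Promise.reject(new Error("things went badly"));
}

function recoverFromFailure(e) {
    return "default result";
}

function getTheResult() {
    return thisMightFail().otherwise(recoverFromFailure);
}
```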
Now we have something that reads nicely!
Let’s add finally
back into the mix, and see how we can use Promises to achieve the same result for asynchronous operations.
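For reference, the synchronous shape again, with the same hypothetical stand-in helpers:

```javascript
// Hypothetical stand-ins so the sketch is self-contained
function thisMightFail() {
    throw new Error("things went badly");
}

function recoverFromFailure(e) {
    return "default result";
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function getTheResult() {
    try {
        return thisMightFail();
    } catch (e) {
        return recoverFromFailure(e);
    } finally {
        alwaysCleanup();
    }
}
```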
First, let’s note that there are some very interesting things about this seemingly simple finally block. It:

1. always executes, regardless of whether thisMightFail and/or recoverFromFailure succeed or fail
2. does not have access to the result of thisMightFail, or to the thrown exception (e), or to the value returned by recoverFromFailure [1]
3. cannot transform an error produced by recoverFromFailure back into a successful result [2]
4. can transform a successful result (whether produced by thisMightFail or recoverFromFailure) into a failure if alwaysCleanup throws an exception
5. can substitute a new error in place of one produced by recoverFromFailure. That is, if both recoverFromFailure and alwaysCleanup throw exceptions, the one thrown by alwaysCleanup will propagate to the caller, and the one thrown by recoverFromFailure will not.

This seems fairly sophisticated. Let’s return to our asynchronous getTheResult and look at how we can achieve these same properties using Promises.
First, let’s use then()
to ensure that alwaysCleanup
will execute in all cases (for succinctness, we’ll keep when.js’s otherwise
):
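A first cut might look like this sketch, again with a shim standing in for when.js's `otherwise()` and hypothetical stand-in helpers:

```javascript
// Shim standing in for when.js's otherwise()
Promise.prototype.otherwise = function (onRejected) {
    return this.then(void 0, onRejected);
};

// Hypothetical stand-ins
function thisMightFail() {
    return Promise.reject(new Error("things went badly"));
}

function recoverFromFailure(e) {
    return "default result";
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function getTheResult() {
    return thisMightFail()
        .otherwise(recoverFromFailure)
        // alwaysCleanup runs whether the previous step fulfills or rejects
        .then(alwaysCleanup, alwaysCleanup);
}
```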
That seems simple enough! Now, alwaysCleanup
will be executed in all cases:
1. when thisMightFail succeeds,
2. when thisMightFail fails and recoverFromFailure succeeds, or
3. when thisMightFail and recoverFromFailure both fail.

But wait: while we’ve ensured that alwaysCleanup will always execute, we’ve violated two of the other properties. alwaysCleanup will receive the successful result or the error, so it has access to either/both, and it can transform an error into a successful result by returning successfully.
We can introduce a wrapper to prevent passing the result or error to alwaysCleanup
:
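A sketch of the wrapper approach, using the same hypothetical helpers and `otherwise()` shim:

```javascript
// Shim standing in for when.js's otherwise()
Promise.prototype.otherwise = function (onRejected) {
    return this.then(void 0, onRejected);
};

// Hypothetical stand-ins
function thisMightFail() {
    return Promise.reject(new Error("things went badly"));
}

function recoverFromFailure(e) {
    return "default result";
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function alwaysCleanupWrapper(resultOrError) {
    // Deliberately ignore resultOrError so alwaysCleanup never sees it
    return alwaysCleanup();
}

function getTheResult() {
    return thisMightFail()
        .otherwise(recoverFromFailure)
        .then(alwaysCleanupWrapper, alwaysCleanupWrapper);
}
```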
Now we’ve achieved one of the two properties we had lost: alwaysCleanup
no longer has access to the result or error. Unfortunately, we had to add some code that feels unnecessary. Let’s keep exploring, though, to see if we can achieve the remaining property.
While alwaysCleanupWrapper
prevents alwaysCleanup
from accessing the result or error, it still allows alwaysCleanup
to turn an error condition into a successful result. For example, if recoverFromFailure
produces an error, it will be passed to alwaysCleanupWrapper
, which will then call alwaysCleanup
. If alwaysCleanup
returns successfully, the result will be propagated to the caller, thus squelching the previous error.
That doesn’t align with how our synchronous finally
clause behaves, so let’s refactor:
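A sketch of the refactor, splitting the wrapper into success and failure variants (same hypothetical helpers and `otherwise()` shim as before):

```javascript
// Shim standing in for when.js's otherwise()
Promise.prototype.otherwise = function (onRejected) {
    return this.then(void 0, onRejected);
};

// Hypothetical stand-ins
function thisMightFail() {
    return Promise.reject(new Error("things went badly"));
}

function recoverFromFailure(e) {
    return "default result";
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function alwaysCleanupOnSuccess(result) {
    // Don't pass the result to alwaysCleanup, and don't let its return
    // value change the outcome; if it throws, that error propagates.
    alwaysCleanup();
    return result;
}

function alwaysCleanupOnFailure(e) {
    // Don't pass the error to alwaysCleanup; always rethrow the
    // original error, unless alwaysCleanup itself throws a new one.
    alwaysCleanup();
    throw e;
}

function getTheResult() {
    return thisMightFail()
        .otherwise(recoverFromFailure)
        .then(alwaysCleanupOnSuccess, alwaysCleanupOnFailure);
}
```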
In both the success and failure cases, we’ve preserved the outcome: alwaysCleanupOnSuccess
will execute alwaysCleanup
but not allow it to change the ultimate result, and alwaysCleanupOnFailure
will also execute alwaysCleanup
and always rethrow the original error, thus propagating it even if alwaysCleanup
returns successfully.
Looking at the refactor above, we can also see that the remaining two properties hold:
In alwaysCleanupOnSuccess
, if alwaysCleanup
throws, the return result
will never be reached, and this new error will be propagated to the caller, thus turning a successful result into a failure.
In alwaysCleanupOnFailure
, if alwaysCleanup
throws, the throw error
will never be reached, and the error thrown by alwaysCleanup
will be propagated to the caller, thus substituting a new error.
With this latest refactor, we’ve created an asynchronous construct that behaves like its familiar, synchronous try/catch/finally
analog.
Some Promise implementations provide an abstraction for the finally
-like behavior we want. For example, when.js Promises provide an ensure()
method that has all of the properties we achieved above, but also allows us to be more succinct:
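A sketch of the `ensure()` version. The shim below is a simplified stand-in for when.js's `ensure()` (the real method is more robust), alongside the usual hypothetical helpers:

```javascript
// Shims standing in for when.js's otherwise() and ensure()
Promise.prototype.otherwise = function (onRejected) {
    return this.then(void 0, onRejected);
};

Promise.prototype.ensure = function (onFulfilledOrRejected) {
    return this.then(
        function (result) { onFulfilledOrRejected(); return result; },
        function (e) { onFulfilledOrRejected(); throw e; }
    );
};

// Hypothetical stand-ins
function thisMightFail() {
    return Promise.reject(new Error("things went badly"));
}

function recoverFromFailure(e) {
    return "default result";
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function getTheResult() {
    return thisMightFail()
        .otherwise(recoverFromFailure)
        .ensure(alwaysCleanup);
}
```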
We started with the goal of finding a way to model the useful and familiar synchronous try/catch/finally
behavior for asynchronous operations. Here’s the simple, synchronous code we started with:
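Sketched with hypothetical stand-in helpers:

```javascript
// Hypothetical stand-ins so the sketch is self-contained
function thisMightFail() {
    throw new Error("things went badly");
}

function recoverFromFailure(e) {
    return "default result";
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function getTheResult() {
    try {
        return thisMightFail();
    } catch (e) {
        return recoverFromFailure(e);
    } finally {
        alwaysCleanup();
    }
}
```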
And here is the asynchronous analog we ended up with: something just as compact, and easily readable:
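Sketched with the same simplified shims standing in for when.js's `otherwise()` and `ensure()`, and hypothetical helpers:

```javascript
// Shims standing in for when.js's otherwise() and ensure()
Promise.prototype.otherwise = function (onRejected) {
    return this.then(void 0, onRejected);
};

Promise.prototype.ensure = function (onFulfilledOrRejected) {
    return this.then(
        function (result) { onFulfilledOrRejected(); return result; },
        function (e) { onFulfilledOrRejected(); throw e; }
    );
};

// Hypothetical stand-ins
function thisMightFail() {
    return Promise.reject(new Error("things went badly"));
}

function recoverFromFailure(e) {
    return "default result";
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function getTheResult() {
    return thisMightFail()
        .otherwise(recoverFromFailure)
        .ensure(alwaysCleanup);
}
```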
Another common construct is try/finally. It is useful for executing cleanup code while still allowing exceptions to propagate when there is no immediate recovery path. For example:
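A sketch, with hypothetical stand-in helpers:

```javascript
// Hypothetical stand-ins
function thisMightFail() {
    throw new Error("things went badly");
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function getTheResult() {
    try {
        return thisMightFail();
    } finally {
        // Cleanup always runs, but any exception still propagates
        alwaysCleanup();
    }
}
```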
Now that we’ve modeled a full try/catch/finally
using Promises, modeling try/finally
is trivial. Similarly to simply cutting out the catch
above, we can cut out the otherwise()
in our Promise version:
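Sketched with the simplified `ensure()` shim from before:

```javascript
// Simplified shim standing in for when.js's ensure()
Promise.prototype.ensure = function (onFulfilledOrRejected) {
    return this.then(
        function (result) { onFulfilledOrRejected(); return result; },
        function (e) { onFulfilledOrRejected(); throw e; }
    );
};

// Hypothetical stand-ins
function thisMightFail() {
    return Promise.reject(new Error("things went badly"));
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

function getTheResult() {
    // No otherwise(): failures propagate to the caller, cleanup always runs
    return thisMightFail().ensure(alwaysCleanup);
}
```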
All of the constraints we’ve been attempting to achieve still hold—this asynchronous construct will behave analogously to its synchronous try/finally
counterpart.
Let’s compare how we would use the synchronous and asynchronous versions of getTheResult
. Assume we have the following two pre-existing functions for showing results and errors. For simplicity, let’s also assume that showResult
might fail, but that showError
will not fail.
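A sketch of those two hypothetical display functions, recording what they would show:

```javascript
var shown;

function showResult(result) {
    // Might fail, e.g. if there is no result to render
    if (result === void 0) {
        throw new Error("no result to show");
    }
    shown = "result: " + result;
}

function showError(e) {
    // Assumed never to fail
    shown = "error: " + e.message;
}
```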
First, the synchronous version, which we might use like this:
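A sketch of the synchronous calling code, with hypothetical stand-ins so it runs on its own:

```javascript
// Hypothetical stand-ins
function getTheResult() {
    return "the result";
}

var shown;
function showResult(result) { shown = "result: " + result; }
function showError(e) { shown = "error: " + e.message; }

try {
    showResult(getTheResult());
} catch (e) {
    // Handles a failure from either getTheResult or showResult
    showError(e);
}
```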
It’s quite simple, as we’d expect. If we get the result successfully, then we show it. If getting the result fails (by throwing an exception), we show the error.
It’s also important to note that if showResult fails, we will show an error. This is an important hallmark of synchronous exceptions: we’ve written a single catch clause that will handle errors from either getTheResult or showResult. The error propagation is automatic, and required no additional effort on our part.
Now, let’s look at how we’d use the asynchronous version to accomplish the same goals:
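A sketch of the asynchronous calling code, with the `otherwise()` shim and hypothetical stand-ins:

```javascript
// Shim standing in for when.js's otherwise()
Promise.prototype.otherwise = function (onRejected) {
    return this.then(void 0, onRejected);
};

// Hypothetical stand-ins
function getTheResult() {
    return Promise.resolve("the result");
}

var shown;
function showResult(result) { shown = "result: " + result; }
function showError(e) { shown = "error: " + e.message; }

var resultPromise = getTheResult();
// showError handles a failure from either getTheResult or showResult
resultPromise.then(showResult).otherwise(showError);
```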
The functionality here is analogous, and one could argue that visually, this is even simpler than the synchronous version. We get the result, or rather in this case, a Promise for the result, and when the actual result materializes (remember, this is all asynchronous!), we show it. If getting the result fails (by rejecting resultPromise), we show the error.
Because Promises propagate errors similarly to exceptions, if showResult fails, we will also show an error. So the automatic behavior here also parallels the synchronous version: we’ve written a single otherwise call that will handle errors from either getTheResult or showResult.
Another important thing to notice is that we are able to use the same showResult
and showError
functions as in the synchronous version. We don’t need artificial callback-specific function signatures to work with promises—just the same functions we’d write anyway.
We’ve refactored our getTheResult code to use Promises to emulate try/catch/finally, and also the calling code to use the returned Promise to handle all the same error cases we would handle in the synchronous version. Let’s look at the complete Promise-based asynchronous version of our code:
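A complete sketch, pulling the pieces together. As before, the shims stand in for when.js's `otherwise()` and `ensure()`, and the helpers are hypothetical stand-ins:

```javascript
// Shims standing in for when.js's otherwise() and ensure()
Promise.prototype.otherwise = function (onRejected) {
    return this.then(void 0, onRejected);
};

Promise.prototype.ensure = function (onFulfilledOrRejected) {
    return this.then(
        function (result) { onFulfilledOrRejected(); return result; },
        function (e) { onFulfilledOrRejected(); throw e; }
    );
};

// Hypothetical stand-ins
function thisMightFail() {
    return Promise.reject(new Error("things went badly"));
}

function recoverFromFailure(e) {
    return "default result";
}

var cleanedUp = false;
function alwaysCleanup() {
    cleanedUp = true;
}

var shown;
function showResult(result) { shown = "result: " + result; }
function showError(e) { shown = "error: " + e.message; }

// The asynchronous analog of try/catch/finally
function getTheResult() {
    return thisMightFail()
        .otherwise(recoverFromFailure)
        .ensure(alwaysCleanup);
}

// The calling code
getTheResult().then(showResult).otherwise(showError);
```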
Of course, there will always be differences between synchronous and asynchronous execution, but by using Promises, we can narrow the divide. The synchronous and Promise-based versions we’ve constructed not only look very similar, they behave similarly. They have similar invariants. We can reason about them in similar ways. We can even refactor and test them in similar ways.
Providing familiar and predictable error handling patterns and composable call-and-return semantics are two powerful aspects of Promises, but they are also only the beginning. Promises are a building block on which fully asynchronous analogs of many other familiar features can be built easily: higher order functions like map
and reduce
/fold
, parallel and sequential task execution, and much more.
You might be wondering why we want this property. For this article, we’re choosing to try to model finally
as closely as possible. The intention of synchronous finally
is to cause side effects, such as closing a file or database connection, and not to transform the result or error by applying a function to it. Also, passing something that might be a result or might be an error to alwaysCleanup
can be a source of hazards without also telling alwaysCleanup
what kind of thing it is receiving. The fact that finally doesn’t have a “parameter” the way catch does means that the burden is on the developer to grant access to the result or error, usually by storing it in a local variable before execution enters the finally. That approach will work for these promise-based approaches as well.↩
Note that finally
is allowed to squelch exceptions by explicitly returning a value. However, in this case, we are not returning anything explicitly. I’ve never seen a realistic and useful case for squelching an exception that way.↩
As a quick review, have a look back at the code we started with, the messy end result when using callbacks, and the things we’d like to fix in order to get back to sanity:
A Promise (aka Future, Delayed value, Deferred value) represents a value that is not yet available because the computation that will produce the value has not yet completed. A Promise is a placeholder into which the successful result or reason for failure will eventually materialize.
Promises also provide a simple API (see note below) for being notified when the result has materialized, or when a failure has occurred.
Promises are not a new concept, and have been implemented in many languages. While several implementations of the Promise concept in Javascript have been around for a while, they have started to gain more popularity recently as we start to build bigger, more complex systems that require coordinating more asynchronous tasks.
(NOTE: Although there are several proposed Promise API standards, Promises/A has been implemented in several major frameworks, and appears to be becoming the de facto standard. In any case, the basic concepts are the same: 1) Promises act as a placeholder for a result or error, 2) they provide a way to be notified when the actual result has materialized, or when a failure has occurred.)
In the case of an XHR Get, the value we care about is the content of the url we’re fetching. We know that XHR is an asynchronous operation, and that the value won’t be available immediately. That fits the definition of a Promise perfectly.
Imagine that we have an XHR library that immediately returns a Promise, as a placeholder for the content, instead of requiring us to pass in a callback. We could rewrite our asynchronous thisMightFail
function from Part 1 to look like this:
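A sketch of that rewrite; `xhrGet` is a hypothetical Promise-returning XHR lib, stubbed here with an already-resolved promise so the example runs:

```javascript
// Hypothetical Promise-returning XHR lib: xhrGet immediately returns
// a Promise for the content (stubbed as already resolved)
function xhrGet(url) {
    return Promise.resolve("the content of " + url);
}

function thisMightFail() {
    // Return the Promise placeholder as if it were the real result
    return xhrGet("theUrl");
}
```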
(Note that several popular Javascript libraries, including Dojo (see also this great article on Dojo’s Deferred by @bryanforbes) and jQuery, implement XHR operations using promises)
Now, we can return the Promise placeholder as if it were the real result, and our asynchronous thisMightFail
function looks very much like a plain old synchronous, call-and-return operation.
In a non-callback world, results and errors flow back up the call stack. This is expected and familiar. In a callback-based world, as we’ve seen, results and errors no longer follow that familiar model, and instead, callbacks must flow down, deeper into the stack.
By using Promises, we can restore the familiar call-and-return programming model, and remove the callbacks.
To see how this works, let’s start with a simplified version of the synchronous getTheResult
function from Part 1, without try/catch so that exceptions will always propagate up the call stack.
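A sketch of that simplified synchronous version, with a hypothetical `thisMightFail`:

```javascript
// Hypothetical synchronous version: a result is returned, or an
// exception propagates up the call stack
function thisMightFail() {
    return "the content";
}

function getTheResult() {
    // No try/catch: success or failure simply flows back to the caller
    return thisMightFail();
}
```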
Now let’s introduce the asynchronous thisMightFail
from above that uses our Promise-based XHR lib.
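A sketch of the Promise-based version, with the hypothetical `xhrGet` stubbed so it runs:

```javascript
// Hypothetical Promise-returning XHR lib, stubbed
function xhrGet(url) {
    return Promise.resolve("the content");
}

// Asynchronous: returns a Promise for the content
function thisMightFail() {
    return xhrGet("theUrl");
}

// Note: the body is identical to the synchronous version
function getTheResult() {
    return thisMightFail();
}
```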
Using Promises, getTheResult()
is identical in the synchronous and asynchronous cases! And in both, the successful result or the failure will propagate up the stack to the caller.
Notice also that there are no callbacks or errbacks (or alwaysbacks!) being passed down the callstack, and they haven’t polluted any of our function signatures. By using Promises, our functions now look and act like the familiar, synchronous, call-and-return model.
We’ve used Promises to refactor our simplified getTheResult function, and fix two of the problems we identified in Part 1. We’ve:

- restored the familiar call-and-return programming model, and
- removed the callback and errback parameters from our function signatures.
But, what does this mean for callers of getTheResult
? Remember that we’re returning a Promise, and eventually, either the successful result (the result of the XHR) or an error will materialize into the Promise placeholder, at which point the caller will want to take some action.
As mentioned above, Promises provide an API for being notified when either the result or failure becomes available. For example, in the proposed Promises/A spec, a Promise has a .then()
method, and many promise libraries provide a when()
function that achieves the same goal.
First, let’s look at what the calling code might look like when using the callback-based approach:
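A sketch of callback-style calling code; `getTheResult`, `showResult`, and `showError` are hypothetical stand-ins here:

```javascript
// Hypothetical callback-based getTheResult: the result or error is
// delivered via callback/errback instead of return/throw
function getTheResult(callback, errback) {
    setTimeout(function () { callback("the result"); }, 0);
}

var shown;
function showResult(result) { shown = "result: " + result; }
function showError(e) { shown = "error: " + e.message; }

getTheResult(showResult, showError);
```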
Now, let’s look at how the caller can use the Promise that getTheResult
returns using the Promises/A .then()
API.
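A sketch of Promise-based calling code, with hypothetical stand-ins:

```javascript
// Hypothetical stand-ins
function getTheResult() {
    return Promise.resolve("the result");
}

var shown;
function showResult(result) { shown = "result: " + result; }
function showError(e) { shown = "error: " + e.message; }

var resultPromise = getTheResult();
// then() accepts both the success handler and the error handler
resultPromise.then(showResult, showError);
```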
Or, more compactly:
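Sketched without the intermediate variable:

```javascript
// Hypothetical stand-ins
function getTheResult() {
    return Promise.resolve("the result");
}

var shown;
function showResult(result) { shown = "result: " + result; }
function showError(e) { shown = "error: " + e.message; }

getTheResult().then(showResult, showError);
```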
(Image from The Meta Picture)
Wasn’t the whole point of this Promises stuff to avoid using callbacks? And here we are using them?!?
In Javascript, Promises are implemented using callbacks because there is no language-level construct for dealing with asynchrony. Callbacks are a necessary implementation detail of Promises. If Javascript provided, or possibly when it does provide in the future, other language constructs, promises could be implemented differently.
However, there are several important advantages in using Promises over the deep callback passing model from Part 1.
First, our function signatures are sane. We have removed the need to add callback and errback parameters to every function signature from the caller down to the XHR lib, and only the caller who is ultimately interested in the result needs to mess with callbacks.
Second, the Promise API standardizes callback passing. Libraries all tend to place callbacks and errbacks at different positions in function signatures. Some don’t even accept an errback. Most don’t accept an alwaysback (i.e. “finally”). We can rely on the Promise API instead of many potentially different library APIs.
Third, a Promise makes a set of guarantees about how and when callbacks and errbacks will be called, and how return values and exceptions thrown by callbacks will be handled. In a non-Promise world, the multitude of callback-supporting libraries and their many function signatures also means a multitude of different behaviors:
… and so on …
So, while one way to think of Promises is as a standard API to callback registration, they also provide standard, predictable behavior for how and when a callback will be called, exception handling, etc.
Now that we’ve restored call-and-return and removed callbacks from our function signatures, we need a way to handle failures. Ideally, we’d like to use try/catch/finally, or at least something that looks and acts just like it and works in the face of asynchrony.
In Part 3, we’ll put the final piece of the puzzle into place, and see how to model try/catch/finally using Promises.
Exceptions and try/catch are an intuitive way to execute operations that may fail. They allow us to recover from the failure, or to let the failure propagate up the call stack to a caller by either not catching the exception, or explicitly re-throwing it.
Here’s a simple example:
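A sketch of such an example, with hypothetical stand-in helpers:

```javascript
// Hypothetical stand-ins
function thisMightFail() {
    throw new Error("things went badly");
}

function recoverFromFailure(e) {
    // e.g. return some default result
    return "default result";
}

function getTheResult() {
    try {
        return thisMightFail();
    } catch (e) {
        return recoverFromFailure(e);
    }
}
```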
In this case, getTheResult
handles the case where thisMightFail
does indeed fail and throws an Error
by catching the Error
and calling recoverFromFailure
(which could return some default result, for example). This works because thisMightFail
is synchronous.
What if thisMightFail
is asynchronous? For example, it may perform an asynchronous XHR to fetch the result data:
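A sketch of an asynchronous, callback-based `thisMightFail`; the `xhrGet` lib here is hypothetical, stubbed so the example runs:

```javascript
// Hypothetical callback-based XHR lib, stubbed
function xhrGet(request) {
    setTimeout(function () { request.load("the content"); }, 0);
}

function thisMightFail(callback, errback) {
    xhrGet({
        url: "theUrl",
        load: callback,  // called with the content on success
        error: errback   // called with the error on failure
    });
}
```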
Now it’s impossible to use try/catch, and we have to supply a callback and errback to handle the success and failure cases. That’s pretty common in Javascript applications, so no big deal, right? But wait, now getTheResult
also has to change:
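A sketch of how `getTheResult` has to change; the helpers are hypothetical stand-ins:

```javascript
// Hypothetical async stand-in: fails via errback
function thisMightFail(callback, errback) {
    setTimeout(function () { errback(new Error("things went badly")); }, 0);
}

function recoverFromFailure(e) {
    return "default result";
}

// getTheResult's signature must now accept the callbacks too
function getTheResult(callback, errback) {
    thisMightFail(callback, function (e) {
        callback(recoverFromFailure(e));
    });
}
```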
At the very least, callback
(and possibly errback
, read on) must now be added to every function signature all the way back up to the caller who is ultimately interested in the result.
If recoverFromFailure
is also asynchronous, we have to add yet another level of callback nesting:
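A sketch of the extra nesting, again with hypothetical stand-ins:

```javascript
// Hypothetical async stand-ins
function thisMightFail(callback, errback) {
    setTimeout(function () { errback(new Error("things went badly")); }, 0);
}

// recoverFromFailure is now asynchronous too, so it also takes callbacks
function recoverFromFailure(callback, errback) {
    setTimeout(function () { callback("default result"); }, 0);
}

function getTheResult(callback, errback) {
    thisMightFail(callback, function (e) {
        // Another level of nesting: recovery itself may fail
        recoverFromFailure(callback, errback);
    });
}
```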
This also raises the question of what to do if recoverFromFailure
itself fails. When using synchronous try/catch, recoverFromFailure
could simply throw an Error
and it would propagate up to the code that called getTheResult
. To handle an asynchronous failure, we have to introduce another errback
, resulting in both callback
and errback
infiltrating every function signature from recoverFromFailure
all the way up to a caller who must ultimately supply them.
It may also mean that we have to check to see if callback and errback were actually provided, and if they might throw exceptions:
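A sketch of what that defensive version might look like; the exact shape is hypothetical, but the nesting, the guards, and the two try/catch blocks convey the problem:

```javascript
// Hypothetical async stand-ins
function thisMightFail(callback, errback) {
    setTimeout(function () { errback(new Error("things went badly")); }, 0);
}

function recoverFromFailure(callback, errback) {
    setTimeout(function () { callback("default result"); }, 0);
}

function getTheResult(callback, errback) {
    thisMightFail(
        function (result) {
            // Defensive: the caller may not have supplied a callback,
            // and the callback itself might throw
            if (callback) {
                try {
                    callback(result);
                } catch (e) {
                    if (errback) { errback(e); }
                }
            }
        },
        function (error) {
            recoverFromFailure(
                function (result) {
                    if (callback) {
                        try {
                            callback(result);
                        } catch (e) {
                            if (errback) { errback(e); }
                        }
                    }
                },
                function (e) {
                    if (errback) { errback(e); }
                }
            );
        }
    );
}
```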
The code has gone from a simple try/catch to deeply nested callbacks, with callback
and errback
in every function signature, plus additional logic to check whether it’s safe to call them, and, ironically, two try/catch blocks to ensure that recoverFromFailure
can indeed recover from a failure.
Imagine if we were also to introduce finally
into the mix—things would need to become even more complex. There are essentially two options, neither of which is as simple and elegant as the language-provided finally
clause. We could: 1) add an alwaysback
callback to all function signatures, with the accompanying checks to ensure it is safely callable, or 2) always write our callback/errback to handle errors internally, and be sure to invoke alwaysback
in all cases.
Using callbacks for asynchronous programming changes the basic programming model, creating the following situation:

- results and errors no longer flow back up the call stack to callers, so we cannot use simple call-and-return, and we cannot handle errors using try/catch/finally
- callback and errback parameters must be added to function signatures all the way down to the function that actually performs the asynchronous operation
We can do better. There is another model for asynchronous programming in Javascript that more closely resembles standard call-and-return, follows a model more like try/catch/finally, and doesn’t force us to add two callback parameters to a large number of functions.
Next, we’ll look at Promises, and how they help to bring asynchronous programming back to a model that is simpler and more familiar.
My slides are up on slideshare, and if you were at the meetup, I’d really appreciate your taking a minute to rate my talk, and (especially!) leave a comment about how it could be improved.
If you’re interested in getting more background on OOCSS, I encourage you to check out Nicole’s slides and video. I’ve also written a few articles that dig deeper into some of the concepts in last night’s talk, such as OOCSS inheritance, why .css()
is bad, and OOCSS Antipatterns:
You can also check out the digital clock demo, including these other versions:
For more deep diving along with some excellent code examples and comment discussion, I also recommend reading John Hann’s post on OOCSS for Engineers.
At the end of the presentation, I mentioned that I’m working on OOCSS design patterns for web applications. My plan is to post more information soon about the ones mentioned in the slides (with example code), as well as others, so stay tuned.
Thanks to Chris Bannon and Wijmo for organizing and sponsoring, to HackPittsburgh for lending us their space, and to everyone who attended! I had a blast, and I’m already looking forward to the next meetup.
Thanks to Chris Bannon from Wijmo for organizing the meetup, and for inviting me to speak. Hope to see you there!
Here you go: OOCSS Slideshows.
You might be saying, “Good grief, all that CSS is overkill!”, and be tempted to think that just using image.style.display = 'none'
is good enough. Try implementing those 3 effects in plain Javascript … you probably wouldn’t get it right—I know I probably wouldn’t.
Or maybe you’re thinking that you’d just use jQuery, Dojo, or your favorite effects library to do it. Putting aside the fact that the original interview question stipulated no libraries, the advantage of an OOCSS approach, as I’ve written about at length before, is separation of concerns, and all its related benefits.
Using an MVC and OOCSS approach puts control of the slideshow transitions into the hands of the designer, and changes to the transitions require zero Javascript changes.
dojo.query
chains, since I tend to break them into multiple lines when chaining more than 1 or 2 NodeList
function calls (and I’m betting most folks do that, too). While thinking about how to handle this, I came to the conclusion that there were basically three ways I could approach it:
I decided to go with #3. In fact, this is my general rule of thumb for the completion bundle in the short and medium term. I want to make something that is both useful, and something I can actually release since I’m only working on it a few hours each week. An unreleased, vaporware completion bundle is far less cool than a released one that works 80% of the time.
Here are a few shots of the multiline statement parser in action, completing a dojo.query
chain. Notice also, in the last shot, it guesses correctly that I am starting a new statement, even though I didn’t end the previous one with a semicolon. However, it’s not perfect, and will almost certainly break under less common conditions. I’ve tried, though, to abstract the parser from the completion code in a way that allows the parser to be improved when I find cases where it breaks.
Last week, I started thinking about what features I’d want in the TextMate Dojo completion bundle in order to feel good about calling it releasable. I made a list, and started knocking them down. Here are a few that I managed to tackle over the past week:
- Scans the dojo.require()s in the current file and adds their symbols to the completion context.
- Detects when you’re inside a dojo.require() and completes dojo package names instead of functions and properties.
- Detects when you’re inside a dojo.query chain and completes NodeList functions while you’re in the chain.
- Combining dojo.require and dojo.query handling, it will complete functions from NodeList mixins, such as dojo.NodeList-fx, that you’ve dojo.require’d in the current file.

In a word: Javascript. I’ll be posting more about this later, once the first release is ready.
Any completion framework has to be fast. I’m ruthless when it comes to development tools, and I’ll bet you are too. If something slows me down, I stop using it. If the pain outweighs the benefits, it’s gone. So, one of my goals is to make sure this bundle is as fast as it can be.
The current performance is very good on my laptop. The completion popup is nearly instantaneous, even with dojo.require
scanning, and hundreds of potential completions. That said, I’m using the most recent MacBook Pro rev with 8g RAM and an SSD. I realize that may not be the most common setup, so I will be testing it with other setups to make sure it doesn’t suck.
A few folks have asked when they can get the bundle. Since it’s a side project, I don’t want to give a date. I’ll release an initial version when I feel it’s at the point that I’d actually use it, and I can tell you that it’s getting close :)
For now, here are a few more teasers that show the new stuff in action.
It turns out, though, that TextMate provides some builtin help for rolling your own completions, and a lot of folks have done that for various languages and platforms. I decided to take a crack at it for Dojo as a part of a new, simplified Dojo bundle I’ve been working on. Here are a couple teaser screenshots of what I have so far.
I’m not ready to post the bundle yet, but I’m actively working on it, so I hope to have an initial version ready within the next couple of weeks.
The very next day, I thought of a way to do it, so here’s an analog theme (source on github) for the digital clock done entirely in CSS.
First, note a couple things:
The key is the use (in fact, abuse, read on!) of the immediate sibling selector to rotate the clock hands to the correct position, using CSS3 transforms. For example, the line above says when the first hour digit is a “0”, and the second hour digit is also a “1” rotate the second digit (remember, it’s been styled to look like an analog clock hand) by 30°. Each hour represents 30° (360 / 12 = 30), so looking at the rest of the hour selectors, you can see how it works.
Similarly, the minutes and seconds are rotated using the immediate sibling selector. The only difference is the degrees, since 360 / 60 = 6° per minute/second.
Ok, neat, it works, but …
At this point, you’re probably saying to yourself (and if you’re not, you should be) “This is a terrible way to build an analog clock”. You are absolutely right. There are many problems with this, and I’d even go so far as to call the whole thing an antipattern.
From the HTML element hierarchy, to the CSS classnames, to the time computations, most everything is tailored to represent an LED-based digital clock.
However, the point of this little exercise was not to find the best way to engineer an analog clock. The point was to answer the question “could it be done?”. But maybe we can learn something about OOCSS anyway.
As the saying goes, “when all you have is a hammer, everything looks like a nail”, and in this case of trying to use only OOCSS to transform the digital clock into analog, there’s certainly some overly-ambitious hammering going on.
If the HTML is poorly-suited to the task at hand, trying to apply OOCSS on top of it will probably just make things worse. In fact, in this example, the HTML and CSS are fighting each other rather than working together.
OOCSS is not just about CSS, it’s about identifying objects in the view first (as John said in our presentation, “ignore the HTML!”), and then structuring HTML and CSS around the containers, content, identity, and states of those objects.
The fact is that the objects in this clock are LED digits, not analog clock hands, and some mildly clever CSS doesn’t change that. Conversely, this bastardization means that the hands of the analog clock have the classes “display-area” and “digit”, as well as “d0” – “d9”, none of which seem like logical choices for the hands of an analog clock!
Antipattern: In any reasonably complex system, writing HTML and CSS first is an antipattern.
What to do?: Break out the wireframes, gather around the whiteboard, and start identifying objects! List their states. Talk about ways to translate them into well-structured HTML containers and content.
One of the results of this object/HTML/CSS mismatch is state explosion, aka combinatorial explosion. There are 72 CSS rules needed just to rotate the clock hands, whereas there are only 10 rules for the original digital clock LED digits. That certainly qualifies as state explosion, and just looking at the analog rules should give you an uneasy feeling that something is wrong.
In fact, modeling any analog clock, not just this bad example of an analog clock, as discrete OOCSS states seems wrong. Consider also the progress bar example John gave during our talk. Progress bars, in most cases, represent a continuous function rather than a discrete function, and therefore can require an infinite number of states to model their possible values—e.g. 30%, 30.1%, 30.15%, and so on.
Antipattern: Trying to model continuous values with OOCSS state is an antipattern.
What to do?: Use a mechanism better suited to continuous values/functions, such as a vector library, or yes, even direct (but well abstracted!) style manipulation.
One other thing that should be bothering you about this analog clock is that nearly half the HTML elements are permanently hidden by the analog theme’s CSS. This can be an indication that you’ve misidentified the view objects. In this case, I think it’s pretty obvious that the objects I originally identified when creating the digital clock, that is, active digits composed of lit or unlit LED bars, simply are not present in the analog clock.
Antipattern: Having sections of permanently hidden HTML is an antipattern.
What to do?: Review your objects and wireframes. You may have misidentified some objects, or your application may have changed significantly enough over time that the objects you had originally, and correctly, identified are no longer present. Either way, it’s time to revisit the wireframes and refactor.
Trying to shoehorn an analog display into the digital clock was fun, but more importantly, I think it helped to identify some OOCSS antipatterns. Hopefully these will help us all avoid some pitfalls!
One of the first ideas I had was to try to use fonts instead of the LED divs to show the clock digits. So, here you go.
It turned out to be fairly easy, and required zero changes to the Javascript view controller. The view controller simply passes the same messages to the view as it did before. In other words, the view controller relies on a message-passing-based view API to which the OOCSS responds. That API is unchanged in this version. To verify, you can hop on over to Daniel’s binary mod, which references my JS directly, and see that it still works.
It also required only superficial changes to the HTML:
The key changes were, of course, in the CSS, and here are some of the most relevant bits.
The comments pretty much say it all, but basically it hides the LEDs and then uses :before content to inject the font-based digits.
One thing I hadn’t thought about before I actually ran it the first time, was that the “1” digit is much thinner than all the others, so I had to forcibly set a specific width for it (in both hours/minutes and the smaller seconds). Without that specialization, the clock looks too sparse when there is a “1” (or several “1”s) being displayed. Yet another win for the OOCSS base and specialization pattern.
I think this theme looks pretty good (although I’m still partial to the LEDs), but if you have suggestions for how to tweak it, I’d love to hear them. Also, if you have an idea for how to push the envelope, feel free to leave a comment, or tweet it up!
Stay tuned for more envelope pushing …
If you want to read even more about OOCSS, you can also check out a few of our blog posts on the subject:
One of my favorite things about the conference was, as Paul Elliot (@p_elliott) put it, the hallway track. I got to meet and talk with so many cool people between the sessions. John and I both had quite a few people tell us (in addition to a few good pirate jokes) that they had already been thinking about, and doing, some of the things from our talk. They were glad to hear that other people had arrived at the same ideas, as validation that these techniques work, and hearing it was good validation for us as well.
I really appreciate all the feedback so far on the presentation, and on the digital clock as well, on which I have received many compliments. Thanks, everyone! If you haven’t already rated our talk, John and I both would really appreciate your feedback over at SpeakerRate.
I also have to say that overall, the conference showed how much energy there is not only in the jQuery community and the larger Javascript community, but also in front-end engineering as a whole. It’s a great time to be a front-end engineer.
Thanks to the jQuery team and the conference organizers for putting together and running an excellent conference. I had a blast. Nice job all around.
Along came Daniel Lamb, who put that to the test, and created his spiffy binary clock mod.
He created a slightly modified View structure for the binary clock display in his HTML, and then cleverly applied OOCSS principles to inherit from my original OOCSS. Most interestingly, though, notice that he didn’t change a single line of code in the JS View Controller. In fact, he referenced my JS View Controller directly in his script tag. Because CSS classes are used as a message passing mechanism, and his View responds to the same messages, the binary clock works perfectly.
I’d like to thank Daniel for creating such a cool mod and perfectly illustrating the powerful separation of concerns that OOCSS and MVC can provide.
I have more I’d like to add, so jump over to his post, cujo.js — OOJS, OOCSS, and OOHTML — Part 1 (OOCSS for Engineers) and come back. He specifically touches on direct style manipulation in the sections OOCSS State and OOCSS decreases risk.
The major points he makes are that direct style manipulation:
These are great points, and I agree with them. I’d like to expand on #s 1 and 2 a bit, and talk about a few other reasons in the context of the “OO” in OOCSS.
Object-oriented means “with a focus on objects”. It is a way of thinking about a problem and how to structure potential solutions. There are programming languages, such as Java, C++, C#, and even Javascript, that provide features to make it easier to apply and enforce OO principles, but a good developer can apply these principles in any language.
The fundamentals of OO include abstraction, inheritance, polymorphism, and encapsulation, among others. Yeah, they’re all 30-point scrabble words, but more importantly, they are time-tested software engineering principles.
If OO means “with a focus on objects”, it seems logical to say OOCSS means, “with a focus on CSS objects”, and I believe there is a huge amount of value to be had in thinking of HTML and CSS as defining View Objects. I’d love to write about how various OO principles apply to OOCSS, and maybe someday soon I will, but for now, I’d like to look at how direct style manipulation violates two of them in particular: inheritance and encapsulation.
I previously wrote about the power of ancestor specializations and state changes effecting changes in descendants, and John goes into even more detail about it. Part of the reason this is so powerful is that it works in harmony with the “C” in CSS, the Cascade.
Using direct style manipulation logic essentially moves specialization and state inheritance from the CSS cascade to procedural Javascript. I think this is bad for two reasons:
1. Duplicating the cascade moves the mechanism from the browser’s super-fast style engine to the fairly-fast-but-way-slower-than-the-browser Javascript VM. Don’t think there’s a difference? Check out scripty2’s comparison of hardware-accelerated CSS3 transitions vs. Javascript-driven animation.
2. Procedural code is less easily checked by IDEs, and is more risky to change than declarative CSS rules. I’d also argue that a well-organized set of CSS rules will visually communicate the cascade, and thus specialization order, more quickly and clearly than procedural branching. To draw an analogy with another popular OO language, which is faster and easier to get right: declaring a Java subclass via “class MySubclass extends MySuperclass” and letting the compiler do the heavy lifting, or writing the Java code that generates the bytecode for MySubclass?
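To make the contrast concrete, here is a sketch of the same state change done both ways (the function names, options, and style values are invented for illustration, not taken from any real codebase):

```javascript
// Antipattern: the controller re-derives the cascade procedurally.
// Every new axis of variation (theme, size, browser) multiplies the branches.
function disableButtonProcedurally(el, opts) {
  el.style.opacity = '0.5';
  el.style.background = opts.darkTheme ? '#333' : '#ccc';
  el.style.padding = opts.compact ? '2px 4px' : '6px 12px';
}

// Preferred: send one message and let the browser's style engine
// resolve ".button.disabled" (and all its descendant rules) natively.
function disableButton(el) {
  el.classList.add('disabled');
}
```

In the first version, the Javascript owns the presentation logic; in the second, it only owns the decision that the button is now disabled.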
Encapsulation is the principle of bundling the state with operations that retrieve and modify that state, as well as the idea that only those operations should access the state directly. To put it in terms of objects, an object is responsible for maintaining and controlling access to its own state by exposing only those operations which other actors are allowed to perform on its state. The other actors in the system must send messages (via exposed operations) to an object to request state changes. The object itself elects how to effect the state change, or even whether to effect it at all.
Without this access control and message passing, it would be much easier to “reach in” and alter the internal state of an object, potentially corrupting it if you don’t understand all the intricacies of its invariants. With access control, an object is protected against corruption, and your application is protected against a corrupted object wreaking havoc.
Most Object Oriented programming languages have built-in mechanisms for declaring access control and enforcing encapsulation, e.g. public, private, protected, and default or package-level access in Java. Even in Javascript, which is much more malleable, you can use closures to achieve private encapsulation—different mechanism, similar effect.
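For instance, here is a minimal sketch of closure-based privacy (my own example, not from any particular library):

```javascript
// The counter's state lives only in the closure; callers must go
// through the exposed operations, so the object controls all changes.
function createCounter() {
  var count = 0; // private: unreachable from outside the closure
  return {
    increment: function () { count += 1; return count; },
    value: function () { return count; }
  };
}
```

There is no property to reach in and corrupt; `counter.count` is simply `undefined` from the outside.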
There is no encapsulation mechanism built into HTML or CSS. The only mechanism that exists is engineering diligence.
The objects defined by OOCSS are View Objects. The HTML node ancestor/descendant relationships, in conjunction with the OOCSS specializations and state, define these objects. Their “state”, in the OO/encapsulation sense, is their style. For example, consider an HTML/OOCSS View Object that is a stylized button. Its encapsulated state may contain height, width, background-color, background-image, background-position (maybe using CSS sprites for button states), borders, margins, padding, etc. These were probably carefully crafted by a CSS designer to produce a button that looks great and has interesting and useful visual cues on hover, when pressed, etc.
Javascript View Controllers contain logic about when View Objects should change state, and to what states they should change. When View Controllers use direct style manipulation, they are “reaching in” and directly changing the encapsulated state of View Objects, potentially corrupting their presentation state by breaking the layout and presentation invariants set up by the designer.
With enough time and care, a JS engineer could certainly duplicate the invariants across some or all possible presentation states, such as, in the case of a button, idle, :hover, and :active, but then would also have to account for other axes of change, such as browser differences (e.g. box model, rgba or hsl colors, opacity, transitions, etc.). The conditionals in the Javascript would start to add up, and probably produce horribly unmaintainable code. John showed, in his simple example and the text that follows, how the branching could get to O(n^m) complexity.
Doing so would also spread the presentation details out into at least two places, the CSS and the Javascript (or more if the presentation is being modified in several places in the JS!). Each time the presentation needed to be modified, it would require looking in both places, and would probably require involving both the CSS designer and the Javascript engineer.
I’ll also point out again that this kind of conditional logic is essentially duplicating the cascade, which is a bad idea for the reasons I listed above.
CSS has a powerful inheritance mechanism in the cascade, and its declarative style, IMHO, provides a simple and expressive way to set up presentation across many view states. It is, in that regard, a declarative language for presentation state machines. The “OO” in OOCSS is a powerful way of thinking about HTML + CSS as View Objects, and gives designers and developers the right tools to declare and manage them.
My dad and I stopped by Dirty Harry’s in Verona today because I noticed they had a Trek belt-drive bike in the window a few weeks ago, and I wanted to try it out. Unfortunately, it was already sold and gone. So we just looked around for a bit, and noticed they had some really cool cruisers.
So, I created this digital clock app as a simple example of some of the concepts I have been applying to build apps with OOCSS and MVC.
Very shortly after that, I got a walkthrough of John (unscriptable) Hann’s ambitious cujo.js project, and I was blown away by two things. First, he and I had basically come to believe many of the same things about applying OOCSS and MVC, and second, he had actually wrapped those things up in an incredibly simple and intuitive API inside cujo.js.
Let’s get down to business and look at a few of these techniques in practice. I hope some theory will fall out of it as I write.
Here’s a bit of HTML from the clock.
Some of the classes here set up “is a” relationships. Even though the order of classes in HTML doesn’t matter, I’ve arranged them left to right from general to specific, because I think that makes the most sense. The node “is a” slot—an admittedly awful name for an area in which the clock will display something. Also, it “is a” digit, which in this case, is a specialization of slot that will display a digit.
Farther down, there is another specialization of digit:
This node is still a digit, but more specifically, it’s a second, as opposed to a minute or hour. Looking at the rendered clock, you can see that seconds are smaller than minutes and hours—the presentation of seconds has been specialized to be smaller.
If you watch the DOM while the clock is running, you’ll notice the digit nodes getting the classes d0 through d9. Obviously, this is driven by Javascript. That Javascript is acting as a View Controller. The digit is the View, and the Javascript driving, or controlling, it is the View Controller.
The classes d0 – d9 represent the possible states that the digit may be in. By changing the class, the View Controller is telling the View to change state. The View still “is a” digit, but it has changed state, for example, from a zero to a one. I guess you could say it “was a” zero and now it “is a” one, and that it’s actually mutating to another specialization rather than changing state. I think that’s a reasonable way to think about it—it’s just not how my brain works, so for me, it’s state.
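A View Controller tick in this style might look roughly like the following sketch (the function name and the flat array of digit elements are illustrative; the real clock's source differs):

```javascript
// Compute the six digits for the current time and message each digit
// View with its new state. The controller knows nothing about LEDs,
// fonts, or binary bars; it only issues d0–d9 state changes.
function renderTime(digitEls, date) {
  var pad = function (n) { return (n < 10 ? '0' : '') + n; };
  var digits = pad(date.getHours()) + pad(date.getMinutes()) + pad(date.getSeconds());
  digits.split('').forEach(function (d, i) {
    // Simplified: the real markup also carries specializations like 'sec'.
    digitEls[i].className = 'slot digit d' + d;
  });
}
```

Run once a second, this is the entire presentation-facing surface of the controller: six class-name messages.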
The digits and the state transitions manifest themselves in the browser with the help of CSS. Here’s a bit of the CSS.
This CSS describes what a digit looks like when it’s in state d0 and d1. In addition to describing the resulting state’s presentation, it is also describing the state transition itself—that is, how (in the visual sense) the digit moves from one state to another using CSS3 transitions.
One thing that is subtle, but I feel is extremely important here, is that by changing the state of the digit, the state of the elements within the digit (i.e. the glowing bars) is being affected. There are no direct state changes to the bars, yet they are changing. The current state of each bar is defined by the hierarchy of classes above it in the DOM plus its own classes. Or, to put it in more general terms:
The current “whole state” (borrowing a term from a recent conversation with John) of a particular View is defined by the specializations and state of its ancestors plus its own specializations and state.
There’s no direct manipulation of nodes at the leaf level via Javascript. I’ll talk about why I believe direct style manipulation, such as $.css or dojo.style, is not a good engineering practice in another post, but the key here is that by simply issuing a state transition on an enclosing View, state changes can be effected on its sub-Views.
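As a sketch (the selector and class names here are invented for illustration), one message on the enclosing View is all the Javascript ever sends:

```javascript
// Change state on the enclosing clock View only. Descendant rules such
// as ".clock.dark .digit .bar { ... }" then restyle every sub-View via
// the cascade, with no leaf-level Javascript at all.
function setClockTheme(clockEl, theme) {
  clockEl.classList.remove('light', 'dark');
  clockEl.classList.add(theme);
}
```

The bars never receive a direct state change, yet they all change, because their whole state is derived from their ancestors plus their own classes.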
Let’s look at why that’s interesting and useful on a practical level.
So, the clock has a Javascript View Controller telling the View to change state which consequently alters the state of its sub-Views, HTML which is defining the structure of that View, and CSS that is describing the presentation of the states and the transitions between them. IMHO, that’s a very powerful separation of concerns.
Imagine you wanted to change the look of the digits by giving the bars beveled ends as some digital displays have, make the entire clock larger or smaller, size it using percentages instead of pixels, or introduce a radical new presentation and color theme. You would not need to touch the View Controller. There are several reasons that is a good thing, IMHO, two of which are:
This separation of concerns provides similar benefits on the View Controller side. When I decided to add support for 24 hour display, all I had to do (ignoring adding the new View components for selecting 12 or 24 hour time, and storing the preference) was to make a small change to the hour computation in the View Controller Javascript, issue slightly different state changes for the hour digits, and ensure that the AM or PM elements are always in the off state.
I didn’t need to make any changes to the CSS or HTML. Engineers can craft the JS, and the designers can craft the CSS. Sure, you might play both roles, but that’s not the case with every team, especially in large, complex apps with many Views, company-wide design standards, branding, and a small army of awesome designers complemented by an equally awesome army of software engineers.
I am very excited about building apps using these techniques, and I am especially excited after seeing John’s work so far on cujo.js. If it turns out to embody these concepts like I think it will, it’s gonna be a very powerful platform. I’ll certainly be keeping an eye on it.
I decided to give the CSS3 digital clock a new home, and in the process, couldn’t help but hack on it a bit more. It has a setting for 12 or 24 hour time, and uses localStorage to remember both your 12/24 and color settings, so you can keep the time you want, in the color you want, whenever you want. Despite all of that, the HTML, CSS, and JS are a bit lighter and more streamlined now as well.
Oh, and it looks great as an OS X dashboard widget. You can use Safari’s web clip feature to snag it and put it on your dashboard!