The 5 minute message

In a discussion with someone I respect, we were talking about how to project a positive image to those higher up the management chain. He suggested that you have both a 5 minute and a 20 second pitch on a variety of topics.

It's very hard as a technologist to describe something without dropping into the internals, because not to do so feels like it creates an ambiguous description. It's very hard to know what NOT to say while still projecting an accurate image of what you are trying to describe. I have been thinking about this recently and have decided that when presenting upwards I should be less interested in what something is and more focused on why something is. It's pretty easy to describe a benefit in terms of a delivery, and that requires far less detail about the delivery itself. This refocusing works for the project I am working on and will help (I think) in later discussions.

I intend to experiment more with this way of thinking and try to create 5 minute and 20 second versions of other things that are important to me. It takes time to craft these versions, but the construction creates insight into the 'thing'. I hope that as I get better this will become a method of thinking that is useful in its own right.

TrailWalker 2009

Well, I finished, in a total time of 27 hours 40 minutes. Which is frankly pretty fine with me.

For those who have not heard, last weekend I took part in the TrailWalker, which is a 100km (62 mile) walk across the South Downs. You do it non-stop (hence the 27 hours). I think I actually walked for 23 and a bit of that.

It's hellish hard work; after 3-4 stages you are in a fair chunk of pain and are pretty knackered (there are 10 stages in all). So it's mostly stubbornness that makes you finish (clearly I have that in spades).

I did the walk with Rob Downs, Bharat Patel, and Peter Spindly (colleagues from work); we finished together, pretty pleased with ourselves. We completed the last 2 stages in (for us) record time, averaging 3.5 miles an hour, and averaged 2.9 miles an hour over the whole course. The race to the finish on the last stage was mostly because we were so tired it was pretty much the only way to get home!

This is the second time I have done this walk (maybe the last, ask me in a week). It does get easier. This time I was much happier overall; I only really struggled on one stage, the last bit of the night stage, and I think that was sleep deprivation more than anything else. Having said that, I am _NEVER_ good going uphill (as I clearly carry more baggage than others); I did think I would not make Devil's Dyke a few times, but make it I did. Last time I took just over 30 hours, mostly because we took too long over rest stops. This time we were pretty 'strict' about keeping going.

Benjohn and Ant were our support crew and they were bloody marvellous; they did everything we could have asked and more. They did wonders picking us up at each stage and preparing us for the pain to come. Particular thanks for the Kinder egg and party poppers - VERY COOL.

We were fortunate to walk with Steve and his group for the first 3/4 of the walk, which made it all the more enjoyable -- gratz to Steve and his guys for finishing.

The value of pair programming

So pair programming is the bastard stepchild of agile; in many 'pragmatic' conversations pair programming is dismissed by the smart folks. Is that fair?

I must prefix this post by saying I have never developed a product using pair programming, so this comes from du brain not from du experience.

So, as they say in shampoo commercials, here is the *coff* pseudo *coff* science part.

First let's get right out there and say that pair programming hits coding efficiency, just in terms of laying down code - let's call that cost X. Now we have that out of the way, let's park it (we shall return).

So are there any advantages? Now that we have parked the cost, this bit is pretty easy.

o 2 people looking at code ARE going to produce fewer defects; isn't that obvious? (You are going to see that argument a lot, as I think a lot of what makes pair programming work is obvious.)
o If you implement peer review then it must be more efficient to do it at the source: automatic review, what's not to love!
o A slightly more complex thought - I would assert that there are two development mindsets, the micro architecture and the macro. Using the micro mind you are winding out loops, thinking about the API and so forth. Using the macro mind you are considering the large-scale impact of a change. Maybe it's me, but I frequently develop some code and am very proud of it, but then have to burn it when I try to integrate it into the rest of the code base because I have missed the point (macro architecture). How is it not obvious that two mindsets are best served by two minds?
o .. OK, bored now .. I am sure that if you disregard the cost you could think of lots of other advantages.

So let's return to the cost. Given we have established that the value (call it Y) is substantial (not saying it's large, just that it's substantial), can we get a view on the size of X?

Let's consider a different view of pair programming: 2 programmers and 2 computers, but with a baton that must be passed between the developers - you can only type when you hold the baton. What sort of efficiency would you (as a developer) lose under such an arrangement? I don't think I would lose all that much; clearly I would lose some, but *meh*, not so much. Is this view of pair programming so different to the conventional view? I would assert it's not - the limiting factor for pair programming is the keyboard; the keyboard is the baton. Your loss (2 programmers focusing on the same code base) is only there to realise the advantages above.

Summary: The only way that X is large is when the keyboard is the limiting factor in coding. I don't know about you folks, but my brain is a far bigger roadblock :-) therefore adding 2 brains to a keyboard actually makes more effective use of the keyboard. I am not saying that X is zero; I am trying to show that X is modest - a small number, in fact lower than the value Y of the advantages - therefore it's net positive!


How do I manage a bulging inbox

Within my main inbox I have 3 sub-folders: Do, Done, and Defer, and 3 macros to move the selected mail into the relevant folder (alt-1 moves the selected mail to Done and then selects the next mail). Each day I quickly filter my inbox into the three folders. I can process several hundred mails in 20 minutes or so.

Do: something I expect to process today. Done: something I don't care about, or have read and understood; I don't expect to refer back to these today. Defer: something I will do something about, but not today.

At the end of processing I expect my inbox to be empty.  

At the end of the day all mail items in Do move to Defer (I don't want to keep things in Do overnight).

At the start of the day all items in Defer are filtered using the rules above; I don't want to leave things in Defer for more than a day or two. If stuff hangs around too long I add it to my diary to process later.

At the end of the day all mail in Done is copied into an archive folder based on the month/year. Done is just a parking place for things to be archived.
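The daily cycle above can be sketched as a tiny state machine. The folder names come from the post; the decision logic is my reading of it, not a real mail-client API.

```java
// A sketch of the Do/Done/Defer triage as code. The folder names come from
// the post; the decision logic is my reading of it, not a real mail-client API.
enum Folder { DO, DONE, DEFER, ARCHIVE }

class Triage {
    // Daily filter: does this mail need action, and does it need it today?
    static Folder file(boolean needsAction, boolean today) {
        if (!needsAction) return Folder.DONE;    // read/understood, or don't care
        return today ? Folder.DO : Folder.DEFER; // act today, or later
    }

    // End of day: nothing stays in Do or Done overnight.
    static Folder endOfDay(Folder current) {
        if (current == Folder.DO) return Folder.DEFER;     // unfinished work rolls over
        if (current == Folder.DONE) return Folder.ARCHIVE; // archived by month/year
        return current;
    }
}
```

The point of writing it down this way is that every mail has exactly one destination, which is why the inbox can always end up empty.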

I use a tool to index my archive; I actually use X1, but Google Desktop is an excellent alternative.

I pull out any important facts I would like to refer back to into Outlook notes.

I pull out any tasks I would like to recall into OmniFocus (the best GTD tool I have found).

I DO NOT EVER use my inbox as a todo list or a mechanism for recording subtle facts I want to recall later. I know a lot of people do, but IMHO it's just a bad way to be.

Continuous Integration

Automating Software Quality

Why discuss it?
“There is a big difference between saying, 'eat an apple a day' and actually eating an apple every day”

No assertions about tooling are made, but some recommendations or examples are given where appropriate.

What is continuous integration?

Simply put: build the software every time the code changes. The reality of this statement is somewhat more complex. Like many other concepts, CI is the bringing together of a series of common-sense practices to produce something that works well. The 'continuous' bit is obvious, if builds occur every time code is checked in - but what about the 'integration' bit?

So let's start off with the contentious bit!

An integration is a merge; this occurs every time two developers work on the same source tree but don't accept/see the changes of the other.

Common practice is to work in isolation on 'task' branches, and this gives a FALSE sense of security. During development (and testing) there is a constant baseline for a developer to work against, so work continues at a pace. But divergent changes only show up during integration: errors are DELAYED, not removed, and the later in the software process errors are found, the more costly they are to resolve. Martin Fowler asserts that the time to merge changes (and so the cost of the merge) rises exponentially with the time the code branches spend apart, which sounds reasonable. Thus DON'T branch: continuous integration demands that all development exists within a single branch, and is a set of practices that make sure this can occur in a controlled fashion.

This is clearly unrealistic, but it's a good aim. The following guidance should be considered when branching.

1. Commit code every day; uncommitted code does not exist.
2. Each commit should contain a complete unit of work.
3. No commit should EVER break the build.
4. Each commit should be of a size that can be peer reviewed.
5. Each commit should contribute to a current or future production release.

Where the above rules cannot be followed, branches should be used. Consider the following examples:

I. A speculative change that may or may not make it into production: put it on a branch until its fate is established; this avoids polluting trunk.
II. A long-running disruptive change (e.g. a compiler upgrade) that cannot be committed in small chunks without breaking the build.

Realise that EVERY branch costs developer time, so branch little and merge often.

Release Candidate Branch

Just prior to a release, a branch should be established to stabilise the code. This branch is known as the release candidate branch, and it should be constructed as late as possible before the release. The only code changes permitted on the release candidate branch are bug fixes required for the release; no new features should be added. If new features are required, the branch should be abandoned and re-established. Changes to the release candidate branch should only be made by merges from the trunk: the fix is made on trunk and merged UP to the branch.

Pollution of the release candidate branch

A common critique of the single-trunk approach is that code destined for later releases will be released ahead of time; release candidate branches are often made earlier - or branches created specifically - to ensure that such code 'pollution' is avoided. I would assert that this pollution is good: it reduces the testing burden and delivers higher quality software faster, though the development group should be aware of the constraints they are working in and must adjust to fit.

Consider an example.

A development team is working on a web browser called aluminium :-)

DeveloperA is adding flash support.
DeveloperB is adding JavaScript support.
Clearly both these features hit the renderer. Naively, a task branch would be created for each, and as the code is completed it would be merged to trunk/main/head ready for release.

Consider the testing effort. DeveloperA has to test on his branch; when the tests pass he needs to retest on trunk/main/head to validate that the merge did not break his code. A release can now occur. When we look to release the JavaScript work, again this must be tested on the branch and then again on main, and we must of course also regression test the flash work to ensure that the JavaScript changes have not affected it. All told, the application must be UAT tested 5 times to release these 2 features.

If you flatten the work above onto a single branch then the merges occur each day. In this situation the flash testing is done in the same code base as the JavaScript work; aspects of the JavaScript code exist on the release candidate, but the acceptance tests pass, so the JavaScript code is not affecting the correct running of the application. Problems in the first example will certainly also exist in the second, but they will be found early, when they are cheap to fix, and testing will NOT be duplicated. It is clear that DeveloperB may have to 'hide' aspects of his code so the functionality is not released half complete; this can be simply achieved by suppressing menus or using compiler pragmas to remove aspects of the code unless certain properties (DEV=true) are present.
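As a sketch of that last point, hiding half-finished functionality behind a property flag might look like this. The flag name DEV comes from the text above; the class and method names are invented for illustration.

```java
// Sketch of hiding an incomplete feature behind a property flag.
// The flag name "DEV" comes from the text; class/method names are invented.
class FeatureFlags {
    // -DDEV=true on the JVM command line enables in-development features.
    private static final boolean DEV_MODE = Boolean.getBoolean("DEV");

    // Production builds (no DEV property set) never show the unfinished feature.
    static boolean showJavaScriptMenu() {
        return DEV_MODE;
    }
}
```

The menu-building code checks `FeatureFlags.showJavaScriptMenu()` before adding the entry, so the incomplete JavaScript work can live on trunk without being visible in a release.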


What constitutes a build?
Get Source
Compile application
Run tests
Inspect software
Build release package
Deploy release package as if to production

A build in CI is more than might be considered a build within 'traditional development'. The process above is followed for every software change, not just for 'special' release builds. If you do a build every hour you can be pretty certain it will work when you need to do your release. If you only ever execute the release scripts once a month, GOOD LUCK!

1. Gives confidence in the release process; reduces fear, enables smaller development cycles and quicker time to market.
2. Improves the confidence of developers to make changes (especially hard changes that may break things).

If the benefit (to the developer) of CI could be most easily spelt out, it would be in taking back control of the source tree: the ability to make changes with confidence that the effects of those changes can be managed. The remainder of the document will pick out the stages of the build and demonstrate how, by automating them and by adopting appropriate development practices, we can gain confidence in change.

Clean, Get Source

Automated builds must occur on a clean machine; access to this machine should be tightly controlled and changes to it should be subject to version control/audit. Each build should be from clean, to ensure that builds can be repeated and that no side effects from previous builds are carried forward.

Compile Application

The build server's builds should be identical to the builds that occur on developers' machines. The application should compile with no errors or warnings. Any warnings that exist on the build machine should fail the build; warnings that are acceptable should be acknowledged with a compiler pragma to suppress them for the relevant line of code.

Run Tests

Why test? The answer seems obvious, but the reality is more interesting. In simple terms, testing is performed to ensure that an application does what we think it does, but the benefits for a programmer are more profound:

Make sure software works
Make sure software keeps working (which of course enables change)
Show other developers how to exercise your code (and how not to); tests are often a great source of documentation.
Test harness to enable debugging of subsections of code.
Validate bugs have been resolved (and stay resolved)

Types of test

Unit tests

The smallest testable part of an application; the best unit test holds all functions/features/resources not being tested constant. Where dependencies exist (high coupling), either refactor to remove them or use fakes/mocks to eliminate them. Unit tests should be very fast - thousands running in a few seconds - which is important to ensure that all tests are run at each build. Isolate expensive modules with fakes/mocks.

Mocks / Fakes: these objects implement the same interface as a 'real' object and are used to isolate unit tests from other aspects of the system. Fakes are concrete objects that deliver a canned response. Mock objects are active objects that apply their own assertions (methodX must be evaluated prior to methodY) to more realistically portray the 'mocked' object. Mocks/fakes can be hand crafted, or you can use one of the many reflection-based APIs (e.g. EasyMock).
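A hand-crafted fake might look like this sketch. The interface and class names are invented for illustration, not from any real API.

```java
// Hand-rolled fake: implements the same interface as the 'real' collaborator
// but returns a canned response. All names here are invented for illustration.
interface PriceService {
    long priceInPence(String sku);
}

// The real implementation might hit a database or a remote service;
// the fake keeps the unit test fast and isolated.
class FakePriceService implements PriceService {
    public long priceInPence(String sku) {
        return 999; // canned response, no I/O
    }
}

class Basket {
    private final PriceService prices;

    Basket(PriceService prices) {
        this.prices = prices; // injected, so a test can substitute the fake
    }

    long totalInPence(String... skus) {
        long sum = 0;
        for (String sku : skus) {
            sum += prices.priceInPence(sku);
        }
        return sum;
    }
}
```

A unit test then constructs `new Basket(new FakePriceService())` and asserts on the total, never touching the expensive real service.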
Integration tests

End-to-end testing of modules to deliver business value, often referred to as black box testing as little or no code is adjusted (or assumed) within the test. These will not be evaluated on every build, but will be evaluated prior to each release.

Regression tests

A special set of integration tests designed to ensure that the business value delivered by the software does not change over time. These will not be evaluated on every build, but will be evaluated prior to each release.

Test Driven development

TDD cycle

Add test
Run all tests and see failure
Write code to make test pass
Run tests to see pass

Why do this?

KISS, YAGNI – develop ONLY what is needed, focus on the prize!

Three rules of test driven development
1. You are not allowed to write any production code unless it is to fix a failing test.
2. You are not allowed to write any more of a unit test than is sufficient to fail.
3. You are not allowed to write any more production code than is sufficient to cause a failing test to pass.
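The three rules can be illustrated with a tiny invented example: the test is written first (and fails - before Counter exists it won't even compile), then just enough production code is written to make it pass.

```java
// Steps 1 & 2: the test, written first (a plain assert stands in for a
// JUnit test here). Before Counter exists this does not even compile -
// that IS the failing test.
class CounterTest {
    static void incrementAddsOne() {
        Counter c = new Counter();
        c.increment();
        assert c.value() == 1 : "increment should raise the count by one";
    }
}

// Step 3: just enough production code to make the test pass, and no more.
class Counter {
    private int value = 0;

    void increment() { value++; }

    int value() { return value; }
}
```

Note there is no `reset()`, no `decrement()`, nothing speculative: YAGNI in action, because no test demanded them.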


Test Coverage

Coverage is usually expressed as a series of measures:

Function coverage - % of functions executed.
Statement coverage - % of statements executed.
Condition coverage - % of branch choices evaluated.
Path coverage - % of paths executed.
Entry/Exit coverage - % of calls/returns evaluated.

Coverage is a measure of the quality of testing. Developers should strive for 100% coverage, but in reality it's impossible to achieve (a module with n decisions has 2^n paths, and loops can result in an infinite measure). An approximation of sufficient path coverage can be found by considering cyclomatic complexity.
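As a tiny invented illustration of why these measures differ: one decision gives two branch choices, so a single passing test can still leave condition coverage at 50%.

```java
// Invented example: one decision point -> two branch choices.
// Full condition coverage needs at least one test per branch.
class Fare {
    static int price(int age) {
        if (age < 16) return 5; // branch 1: child fare
        return 10;              // branch 2: adult fare
    }
}
```

A single test with `age = 10` passes and executes the first return, but only one of the two branches, so condition coverage sits at 50% until a second test exercises the adult fare.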
Inspect Code

Apply rules to the code to ensure it complies with standards established by the development team leads. A variety of tools exist to perform static and dynamic code inspection; some of the measures are listed here:

Source Lines of Code (SLOC)

The number of lines of code. This is a reasonable measure of effort but a terrible measure of functionality, as a good programmer will often implement more functionality with fewer lines of code.
“Measuring programming progress by lines of code is like measuring aircraft build progress by weight” - Bill Gates.
There are logical and physical SLOC figures, depending on programming style:

for (int i = 0; i < 10; i++) System.out.println("Count: " + i);

for (int i = 0; i < 10; i++)
    System.out.println("Count: " + i);

Both loops are one logical SLOC, but the first is one physical SLOC and the second is two.

Cyclomatic Complexity

Developed by Thomas McCabe, this measures the number of linearly independent paths through program source code:

if (i > 10) { .. } else { .. }

would have a complexity of 2. Cyclomatic complexity is a measure that directly affects the quality of code: less complex code is easier to maintain and test. The complexity figure is valuable to QA as it gives an indication of the number of tests that should be executed. There is a minimal complexity for a given language/algorithm, but it's rare that any code is expressed at that complexity, so programmers can frequently improve quality just by looking to reduce this measure.
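As an invented illustration of reducing the measure, here is the same lookup written as a chain of conditionals and as a data table; the second moves the branching into data and so has a much lower complexity.

```java
import java.util.Map;

// Invented example: the same behaviour at two different complexities.
class Discounts {
    // Branchy version: three decisions, cyclomatic complexity 4.
    static int discountBranchy(String tier) {
        if ("gold".equals(tier)) return 20;
        else if ("silver".equals(tier)) return 10;
        else if ("bronze".equals(tier)) return 5;
        else return 0;
    }

    private static final Map<String, Integer> TABLE =
            Map.of("gold", 20, "silver", 10, "bronze", 5);

    // Table version: the branching has moved into data, complexity 1.
    static int discountTable(String tier) {
        return TABLE.getOrDefault(tier, 0);
    }
}
```

Both behave identically, but the table version needs fewer tests to cover and is easier to extend.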

Cohesion & Coupling

Cohesion is a measure of how strongly related and focused the responsibilities of a software module are.
Coupling describes the relationship where one module interacts with another; there is low coupling if the interaction is via a well-known interface, without dependence on internal state.

Code that has high cohesion and low coupling is easy to maintain and understand:

Changes in one module should not cause ripples in other modules.
Modules are easy to understand/develop in isolation.
Modules can be easily re-used.
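A minimal invented sketch of the two properties: the printer depends only on a narrow interface (low coupling) and does exactly one job (high cohesion), so the store's internals can change freely.

```java
// Low coupling: the printer knows only this narrow interface...
interface OrderSource {
    int openOrderCount();
}

// ...so the store can change its internal representation freely.
// All names here are invented for illustration.
class OrderStore implements OrderSource {
    private final java.util.List<String> orders = new java.util.ArrayList<>();

    void add(String order) { orders.add(order); }

    public int openOrderCount() { return orders.size(); }
}

// High cohesion: one focused responsibility - producing the summary line.
class ReportPrinter {
    String summary(OrderSource source) {
        return "Open orders: " + source.openOrderCount();
    }
}
```

If OrderStore later swaps its list for a database, ReportPrinter is untouched: no ripples.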

Build Release package, Deploy release package

Donut Rule
NO ONE ever breaks the build. The term used by many CI/XP developers is 'in the green', referring to the green bar shown when all tests pass. Every time the build is broken it should be fixed at once, and the guilty developer should purchase sugar-covered goodies for the rest of the team!

VERY little of the above is my own work. I read a lot, and most of what's above comes from men and women smarter than me.

How to setup an N95 as a 3g modem for a mac

All the info you need is here: (Ross Barkman's home page).

First, download the Vodafone scripts (scroll down; I used the 'Nokia 3G scripts' link). Unzip the file and copy the three CIS files into /Library/Modem Scripts.

Now link your phone to the Mac using the Bluetooth wizard (make sure you select the 'use my phone as a modem' checkbox on the last page).

Finally you will need the access point info; again Ross is there for us, scroll down. I am using O2, so the details are user 'faster' and password 'web'. There is some info about SMTP and so forth, but as I use Gmail I have not needed it.

The Mayonnaise Jar and 2 Cups of Coffee

It's spam .. but I kinda like it.

When things in your lives seem almost too much to handle, when 24 hours in a
day are not enough, remember the mayonnaise jar and the 2 cups of coffee.

A professor stood before his philosophy class and had some items in front of
him. When the class began, he wordlessly picked up a very large and empty
mayonnaise jar and proceeded to fill it with golf balls. He then asked the
students if the jar was full. They agreed that it was.

The professor then picked up a box of pebbles and poured them into the jar.
He shook the jar lightly. The pebbles rolled into the open areas between the
golf balls. He then asked the students again if the jar was full. They
agreed it was.

The professor next picked up a box of sand and poured it into the jar. Of
course, the sand filled up everything else. He asked once more if the jar
was full. The students responded with a unanimous "yes."

The professor then produced two cups of coffee from under the table and
poured the entire contents into the jar, effectively filling the empty space
between the sand. The students laughed.

"Now," said the professor as the laughter subsided, "I want you to recognize
that this jar represents your life. The golf balls are the important
things--your family, your children, your health, your friends and your
favourite passions--and if everything else was lost and only they remained,
your life would still be full.

The pebbles are the other things that matter like your job, your house and
your car.

The sand is everything else--the small stuff. "If you put the sand into the jar first," he continued, "there is no room for the pebbles or the golf balls. The same goes for life. If you spend all your time and energy on the small stuff you will never have room for the things that are important to you.

"Pay attention to the things that are critical to your happiness. Play with
your children. Take time to get medical checkups. Take your spouse out to
dinner. Play another 18. There will always be time to clean the house and
fix the disposal. Take care of the golf balls first --the things that really
matter. Set your priorities. The rest is just sand ."

One of the students raised her hand and inquired what the coffee
represented. The professor smiled. "I'm glad you asked."

"It just goes to show you that no matter how full your life may seem,
there's always room for a couple of cups of coffee with a friend "

Gok rules!

I have just gocked my wardrobe: I have thrown out all of the clothes that don't fit me or that I don't like. I managed to throw out 4 bin bags! I think Gok is right, it's just damn depressing looking at clothes that don't fit!

Lovely exchange

While out today I heard this conversation (between an American chap and an English lady) about a fountain which was uncovered.

US chap: That would never be allowed in the US, someone could fall in and hurt themselves
UK lady: In the UK we just don't fall in.

Setting up trac on OSX

Yeowch that was harder work than it should have been.

Firstly, I have installed 'stuff' before via port - NEVER again (well, almost never). Darwin ports installs apps into /opt/... and it will manage the dependencies of an installation (but removing an app does not remove its dependencies). I had managed to get several versions of Python installed, a seemingly harmless situation; more on that later.

I already had Subversion installed and integrated with Apache2 (per my previous post). To install Trac I had to add a few dependencies.

First, add setuptools:
$ wget
$ sudo python

Here came the problem: as I had installed something via port that installed Python (and I had /opt/local/bin in my path), it was installing into port's Python's site directory. As _www is the user that runs Apache, it was running OS X's default Python, and thus none of these dependencies were installed. The solution was to move /opt/local/bin to the END of my path, so that port apps are picked up only when there is no OS X version, and to rerun the installs.

pysqlite-2.4.1 - (

$ python build
$ sudo python install

Genshi & Pygments

sudo easy_install Genshi
sudo easy_install Pygments

You may like to install ClearSilver (I did not); it's required for some Trac modules. I may get to this later.

Now to install Trac itself: download it, extract it, and install:

$ sudo python ./ install

This will put the relevant modules into the site directory and some scripts into /usr/local/bin (make sure this is in your path).

Now to set up Trac. I have a single SVN repository, /usr/local/Subversion/Projects, so to match this I created a Trac project at /usr/local/Trac-Projects/Projects.

$ sudo trac-admin /usr/local/Trac-Projects/Projects initenv

and follow the prompts.

At this point you can run the Trac server directly:

$ tracd --port 8000  ../../Trac-Projects/Projects/

This allows you to test the installation. If you want to run Trac this way permanently, see the Trac documentation for instructions on setting it up via launchd. I wanted to use Apache2.

So, back to /etc/apache2/other (again).

Create a file called trac.conf containing:

LoadModule fastcgi_module libexec/apache2/

# Enable FastCGI for .fcgi files
<IfModule fastcgi_module>
    AddHandler fastcgi-script .fcgi
    FastCgiIpcDir /var/run/fastcgi
</IfModule>

ScriptAlias /trac /usr/local/Trac-0.11b2/cgi-bin/trac.fcgi
FastCgiConfig -initial-env TRAC_ENV=/usr/local/Trac-Projects/Projects

<Location "/trac">
    SetEnv TRAC_ENV "/usr/local/Trac-Projects/Projects"
</Location>

<Directory "/usr/local/Trac-0.11b2/cgi-bin/">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>