The Empirical Manifesto - taking continuous improvement seriously

Recently I have been using the scientific method in a small team within a large program. When I say using it, I mean all the time.

Previously I paid lip service to an empirical approach, but I decided to make it the basis of everything the team does. In this I was quite lucky: the team I'm in were keen, and immediate management didn't resist. I am working in an environment where we are a snow-plough for a whole program. We don't know what does work, what won't work, how to do things, where to sit, and so on. This may sound familiar.

Our response was to Guess, Test and Refine really rapidly.
Scrum says Inspect and Adapt, but this is not enough; in fact, it isn't science. This is one of the reasons I was not getting hard evidence for what I was doing, and sometimes had to persuade stakeholders to go along with what the team were saying: "Just trust us, we've done this many times before". Clearly this is subjective and open to challenge, and rightly so.
The scientific method demands a hypothesis, objective tests, analysis of the results, and a willingness to accept that the results may say you are wrong.
If you inspect and adapt, you will only inspect what you are doing, not what you think might work.

For the best definition of the scientific method, I like this summary of a Richard Feynman talk:

Richard Feynman on the scientific method in 1 minute

This is an excerpt:
"If it disagrees with experiment, it's wrong...."
"In general, we look for a new law by the following process. First, we guess it (audience laughter), no, don't laugh, that's really true. Then we compute the consequences of the guess, to see what, if this is right, if this law we guess is right, to see what it would imply, and then we compare the computation results to nature, or we say compare to experiment or experience, compare it directly with observations to see if it works."
Wikipedia gives a rather convoluted definition of the scientific method, but this guideline from it is useful:
  1. Define a question
  2. Gather information and resources (observe)
  3. Form an explanatory hypothesis
  4. Test the hypothesis by performing an experiment and collecting data in a reproducible manner
  5. Analyze the data
  6. Interpret the data and draw conclusions that serve as a starting point for a new hypothesis
  7. Publish results
  8. Retest (frequently done by other scientists)
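Those steps only pay off if you actually record the guesses and their outcomes. Here is a minimal sketch of such a log in Python; the `Hypothesis` class and its fields are my own illustration, not a tool the team actually used:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Hypothesis:
    """One guess-test-refine record, mirroring the steps above."""
    question: str                      # step 1: define a question
    prediction: str                    # step 3: the explanatory guess
    result: Optional[bool] = None      # None until the experiment has run
    follow_ons: List[str] = field(default_factory=list)

    def record(self, outcome: bool, follow_on: Optional[str] = None) -> None:
        """Steps 5-6: analyse, conclude, and seed the next hypothesis."""
        self.result = outcome
        if follow_on:
            self.follow_ons.append(follow_on)


# Example entry, loosely based on one of the hypotheses below.
log = [Hypothesis("Can the business pick up Gherkin quickly?",
                  "One interactive workshop is enough")]
log[0].record(False, follow_on="Try a series of shorter pairing sessions")
```

The important part is not the code but the discipline: every guess gets written down before the test, so a "false" is a recorded result rather than a quiet change of plan.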

What were the results?

Pretty amazing. We powered on, lost people, gained people, and changed everything. Now we are a very small team.

After a few weeks we had tens of ideas, tests and results.

Some hypotheses were:
  • We can check in code, build and view the application in the environments provided - false
  • The business will embrace BDD - true
  • The business will be able to pick up Gherkin through a single interactive workshop - false
  • We can secure an area in the building for ourselves without asking permission - true
  • We can set up our own repo and no-one will stop us - true
  • We can ask everyone to work in our slightly dodgy location - true
  • We can get a UX person who can code - false
  • We can get a UX person who can check in a Design Pattern Library - true
  • A Scrum of Scrums is beneficial - true
  • A Brown Bag is beneficial - true
  • We can get confirmation of an architectural constraint in one week - false
  • We can get a stub of a service in one week - false
  • We can take 2 monitors per desk even though one was assigned - true
  • A Scrum guild works better than asking for a coach - true
  • We can ask to spend our own money to overcome our issues then expense it - false
  • We can set up our test framework with a "tester" in the team - true
  • We can spend the money anyway, be a little poorer and be better at our jobs and happier - true
There were loads of follow-ons from the "falses" and, for that matter, the "trues".

A couple of remaining hypotheses are:

  • There is a relationship between complexity measured in points and the number of BDD scenarios - initial results look positive
  • We can keep our CI performance up using a single t2.medium EC2 instance - first tests being run
  • We can develop full stack without a dedicated F/E developer and have an extra full-stack developer instead - looks like a true, but with caveats
  • We can optimise our tests and run them in CI without throwing more kit at it and without assistance from an expert - about to be called as a false

Future tests to develop a general theory include:

  • We can stop estimating in hours and use another measure of progress (Gherkin scenario count)
  • We can derive story points from count of acceptance tests

The empirical manifesto

I thought I would have a go at this; apologies to the Agile Manifesto.

We keep an open mind to new ideas and embrace the scientific method as a route to the truth and a window on falsehood. 
We value:

Ideas over dogma
Failure over ignorance
Tests over assertions
Results over authority

That is, while it is sometimes expedient to work with the items on the right, we value the items on the left a whole lot more.

In summary

It works. Do it. Don't believe in anything; just test it. If you get a false, think again; if you get a true, come up with a follow-on idea. Use the scientific method to improve, and never stop improving.

Further watching and reading

Thanks, most of all, to Richard Feynman, here on TED.

Also look up Ricardo Semler's TED talk if you really want to get into this.
