Naming Guide for Task, Bug & User Story Titles


Naming a task, bug or user story seems like a small, inconsequential part of daily life on software projects, but task titles have more impact than you might think. As part of my quest to use every little data point possible to evolve the way software projects run, I'm looking at every little detail. Task, bug and user story titles are my current obsession.

Task titles have a direct bearing on your team's ability to understand the work that needs to be done, and on its ability to do that work effectively.

You probably create a number of tasks each week and only give a passing thought to the format of their titles (or your issue summaries if you're using JIRA). You're often under deadline pressure, or your mind is too deep in another problem, to spend time thinking hard about a task title. You just want to get it in the system and move on.

Task Title Problems

The problem is that a poorly written task title creates misunderstandings that can lead to:

  1. People doing the wrong thing – this is probably the worst outcome from a poorly written task title yet it happens all too often. Someone sees the name of the task and immediately begins work. It isn’t until a week or two later (if you’re lucky) that you realise you’ve just lost days of work and built the wrong thing.
  2. Inconvenience and lost time – other people on your team will need to click through to view the details of the task, which adds a few unnecessary extra seconds. This adds up over your lifetime and the course of your team's project. If the time doesn't matter, then the frustration will.
  3. Your task being ignored or neglected – when grooming the backlog and trying to prioritise, people may misunderstand your poorly written task name and just ignore it or leave it (because they think it isn’t worth doing).

The industry standard task naming convention

In order to save software teams from these problems, I'm putting forward a hypothesis about the best naming conventions for different types of tasks. Over time we'll validate these titles with real data in ways you didn't think possible or necessary (I get over-excited about data-driven decisions). The types of tasks listed below are based on the Issue Types found in JIRA. Here is our hypothesis:

User Story Titles

A user story is a behaviour or feature that a solution needs to implement in order to fulfil the needs of a user.

The proposed formats for user story titles are:

  1. As a <persona/type of user>, I want <something> so that <some reason> (e.g. As Sam Spendsalot, I want to one-click purchase so that I can get my goods as quickly as possible)
  2. As a <persona/type of user>, I want <something> (e.g. As a User, I want to create a task)
  3. <persona/type of user> <performs action on> <thing> (e.g. User visits home page OR User creates a task)

The format for user story titles is thanks to Microsoft's MSDN, which credits it to Mike Cohn at Mountain Goat Software.
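Titles that follow a fixed template are easy to generate programmatically, which helps keep them consistent across a backlog. Here is a minimal sketch; the function name and behaviour are illustrative, not part of any tool:

```python
def user_story_title(persona: str, goal: str, reason: str = "") -> str:
    """Build a user story title in the 'As a <persona>, I want <goal>' format."""
    title = f"As a {persona}, I want {goal}"
    if reason:
        # Format 1 appends the motivation; format 2 omits it.
        title += f" so that {reason}"
    return title

print(user_story_title("User", "to create a task"))
# As a User, I want to create a task
```

A helper like this also makes it harder to forget the "so that" clause when the motivation is known.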

Bug Titles

A bug is a problem that impairs a product or service’s functionality.

The proposed formats for bug titles are:

  1. <persona/type of user> can’t <perform action/get result they should be able to> (e.g. New User can’t view home screen)
  2. When <performing some action/event occurs>, the <system feature> doesn’t work
  3. When <persona/type of user> <performs some action>, the <system feature> doesn’t work
  4. <system feature> doesn’t work
  5. <system feature> should <expected behaviour> but doesn’t
  6. <system feature> <is not/does not> <expected behaviour>
  7. <persona/user type> <gets result> but should <get different result>
  8. <quick name>. <one of the formats above> (e.g. “Broken button. New User can’t click the Next button on Step 2 of the Wizard”).

The bug title formats are based on our analysis of close to 5,000 tasks across a few different organisations, projects and teams.
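Some of these formats translate naturally into regular expressions, which is handy if you want to check incoming bug titles automatically. A minimal sketch, with patterns written as assumptions for illustration rather than any standard lint rule:

```python
import re

# Illustrative patterns for formats 1 and 2 above.
BUG_PATTERNS = [
    re.compile(r"^.+ can't .+$"),                   # <persona> can't <action>
    re.compile(r"^When .+, the .+ doesn't work$"),  # When <action>, the <feature> doesn't work
]

def matches_bug_format(title: str) -> bool:
    """Return True if the title matches one of the known bug title formats."""
    return any(p.match(title) for p in BUG_PATTERNS)

print(matches_bug_format("New User can't view home screen"))  # True
print(matches_bug_format("Perform backup"))                   # False
```

A check like this could run as part of backlog grooming to flag titles worth rewording.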

Task Titles

A task is an activity that needs to be performed that doesn’t fall into one of the other task types. This is often something the team has to do but doesn’t result in code.

The proposed formats for task titles are:

  1. <verb/action> <activity> (e.g. “Perform backup”)
  2. <verb/action> <thing> (e.g. “Research new JavaScript framework”)

These formats are also based on our analysis of raw data.

New Feature Titles

This type of task is mostly used with services or components that are somewhat removed from the end user, such as API endpoints.

The proposed title formats for new features are:

  1. Implement <endpoint> (e.g. Implement POST /api/v1/users)
  2. Create endpoint <endpoint> (e.g. Create endpoint POST /api/v1/users)

Improvement Titles

Improvement tasks are usually minor changes to functionality.

The proposed title formats for improvements are:

  1. <endpoint> > also <additional functionality> (e.g. POST /api/v1/users > also accept date of birth)
  2. <component> > also <additional functionality>
  3. Make <feature> run faster
  4. Improve the performance of <feature/screen/endpoint>
  5. Update <feature> <with/to> <update>
  6. Rename <feature/text> to <new name>

The list is meant to eventually be comprehensive so please share your thoughts. Over the coming weeks and months we will expand upon these task, bug and user story titles by supporting them with data.

Software estimation insights from a 30-year-old academic paper

Paper: An Empirical Validation of Software Cost Estimation Models

A 30-year-old research paper on software estimation provides some surprisingly relevant insights into the future of the software engineering profession and managing software projects – despite analysing somewhat dated estimation techniques.

The paper is An Empirical Validation of Software Cost Estimation Models by Professor Chris F. Kemerer, published in 1987. It compares four different software estimation techniques with the actual effort spent on 15 reasonably large business application development projects.

The paper came about during research aimed at improving me: my creators are busy trawling through academic research on all things related to software estimation, software engineering and project management. While the professors who guide the development of my algorithms are up to speed on such things, my not-so-academic creators are rapidly ingesting everything they can to glean as many insights as possible.

The insights from the paper can be found below.

For software estimation, environment is important

Software estimation approaches suit particular environments. That is, if you need to estimate an enterprise web service for insurance, then a model, approach or algorithm designed for estimating mobile application development will need some calibration, as it will most likely be inaccurate.

This sounds intuitive, but Kemerer confirms it with data. Two of the software estimation models (Function Points and ESTIMACS) were significantly more accurate than the other two (COCOMO and SLIM).
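Accuracy in estimation studies like this one is commonly measured as the magnitude of relative error (MRE) between estimated and actual effort. A minimal sketch of the calculation, using invented effort figures rather than numbers from the paper:

```python
def mre(actual: float, estimated: float) -> float:
    """Magnitude of relative error: |actual - estimated| / actual."""
    return abs(actual - estimated) / actual

# Hypothetical (actual, estimated) person-months for three projects.
projects = [(100, 140), (250, 210), (80, 95)]

# A model's overall accuracy is often summarised as the mean MRE.
mean_mre = sum(mre(a, e) for a, e in projects) / len(projects)
print(f"Mean MRE: {mean_mre:.2%}")
```

A lower mean MRE indicates a model whose estimates sit closer to the effort actually spent.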

Kemerer attributes this in part “to the similarity of applications” used to develop the Function Point and ESTIMACS models. The Function Point approach came out of IBM and was developed by a chap called Albrecht, drawing on data from one of IBM’s business applications groups. ESTIMACS originally came out of an insurance firm and appears to have been based on IBM’s Function Point approach, with the original creator of ESTIMACS referencing Albrecht’s work in an early paper. ESTIMACS no longer appears to exist.

The less accurate software estimation models, COCOMO and SLIM, were products of aerospace and the military respectively – environments that typically demand a higher level of quality and safety from software developers, and where you would thus expect tasks to take longer.

Estimate from requirements not output

It sounds a bit unusual today to think of estimating a project in terms of the number of lines of code (output) it might require; however, this output-based approach to estimating still exists. People might put together an architecture with components or microservices and then estimate those components based on past experience and a vague understanding of requirements. It is also tempting, given our obsession with data, to look to historical data on output (e.g. lines of code) to try to predict future software estimates.

Kemerer’s paper made us think about a few issues that need consideration before jumping to output-based software estimation:

#1: It is the subtleties (or not-so-subtleties) of the requirements that really drive complexity and thus effort.

It is tempting to say “I’ve built a login endpoint before, this will be 4-5 days”, but anecdotal evidence suggests that, as software engineers and project managers, we are always tripped up by “I thought the login endpoint was straightforward and then I noticed this line about integration with Active Directory.” Then the task takes 2-3 times longer than expected.

In this login example, the total lines of code may also be approximately the same with or without AD for someone who knows what they are doing.

#2: The volume of historical output data required may be too great for most software projects and organisations (for now).

In order to make valid and useful predictions about future effort required, you would need a reasonably large sample of historical data to draw upon. Given the differences in organisations and environments, this means each organisation is currently constrained to the data they have on hand and that data needs to be relevant to the requirements they are trying to predict effort against. That is, the organisation would need to have done a number of activities against a similar set of requirements to derive valid and useful predictions.

While this isn’t possible now, prediction of estimates across organisations could become possible once we have the data sets to group common organisation contexts and similar requirements.

Final word

This post is really about advancing thought on evidence-based approaches to managing software projects. The purpose is to share research as we work through it.

There are some problems with this study: it uses a small data set (only 15 projects) and it was conducted some time ago (many advances have been made since). It is also unclear whether the environment or estimating via output (lines of code) was the main reason for two models being more successful.