Introduction

There’s little doubt that Board and Committee papers have improved over recent years, but familiar challenges remain – too lengthy and you are swamped with information; too brief and you have to read between the lines to see what’s not there.

And then you have the problems of confirmation bias, groupthink, deliberate “steering” and the rest, all of which can add up to a Corporate Governance issue and, in some cases, a Section 166 report.

Inevitably, there’s a great deal of subjectivity in Board materials, and so in theory data should be your friend, as it were, being both objective-ish and precise-ish. But practice and theory are not the same…

Some of the traps are obvious…

Sometimes data is presented that looks unlikely – financials that show a remarkably consistent improvement; complaints that always flat-line; data that looks, to the experienced eye, too sanitised to be true.
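If a series looks too sanitised, a crude smell-test can even be automated. Here is a minimal sketch in Python, with an invented threshold and invented figures, that flags reported series whose period-on-period variation is implausibly small; it is a prompt for sceptical questions, not a fraud detector.

```python
import statistics

def looks_too_smooth(series, cv_threshold=0.02):
    """Flag a series whose coefficient of variation (stdev/mean) is
    implausibly small for real-world data. The 0.02 threshold is
    invented; calibrate it against series you trust."""
    mean = statistics.mean(series)
    if mean == 0:
        return False
    return statistics.stdev(series) / abs(mean) < cv_threshold

# Invented monthly complaint counts: the second series flat-lines suspiciously.
organic = [412, 388, 455, 390, 471, 402]
sanitised = [400, 401, 399, 400, 400, 401]

print(looks_too_smooth(organic))    # False
print(looks_too_smooth(sanitised))  # True
```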

Or data may be presented with an explanatory narrative that’s at odds with the picture the raw numbers would tell, except that the numbers are rendered inaccessible by their sheer volume (a “snow job”), whereas the narrative tells a readable and encouraging story.

Forward-looking metrics are difficult. It’s not unusual to see projections that show a distant but very encouraging end state (the “hockey-stick”). However, if only limited interim targets are set and reported against, judging which activities are actually driving progress towards that end state becomes all but impossible.
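One way to blunt the hockey-stick is to derive your own interim checkpoints from the promised end state, so there is something testable each period. A minimal sketch with invented figures, assuming compound (geometric) growth between today’s number and the target:

```python
def interim_targets(start, end, periods):
    """Geometric interpolation between today's figure and the promised
    end state: if the hockey-stick is real, growth has to show up at a
    steady compound rate, giving a testable checkpoint each period."""
    ratio = (end / start) ** (1 / periods)
    return [round(start * ratio**t, 1) for t in range(1, periods + 1)]

# Invented example: revenue must grow from 100 to 250 over 8 quarters.
print(interim_targets(100, 250, 8))
# [112.1, 125.7, 141.0, 158.1, 177.3, 198.8, 222.9, 250.0]
```

If actuals persistently undershoot these checkpoints while the end-state projection stays unchanged, the Board has found its question.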

Even simple, widespread and “uncontentious” means of data reporting – traffic lights – can mislead. My green might be your amber. My green might represent what I think the end state will be, not the status right now (red, possibly). And some organisations seek to create nuances in RAG reporting – yellow vs amber; red vs purple – whose meaning might elude some readers (in a household-name, national financial services firm I work with, purple represents a matter that is stuck on red but that nobody thinks is important).
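One practical mitigation is to pin the meaning of each colour down in writing, or in the reporting tooling itself, so that a status always describes the position today rather than a hoped-for end state. A minimal sketch in Python; the colour definitions are invented for illustration.

```python
from enum import Enum

class RagStatus(Enum):
    """An explicit RAG scale. Each value describes the position *today*,
    never the forecast end state. Definitions are illustrative."""
    GREEN = "On track now: no intervention needed"
    AMBER = "At risk now: corrective action agreed and under way"
    RED = "Off track now: no credible recovery plan yet"

def report_line(item: str, status: RagStatus) -> str:
    """Render one pack line with the colour's definition attached,
    so the reader never has to guess what the colour means."""
    return f"{item}: {status.name} ({status.value})"

print(report_line("Core platform migration", RagStatus.AMBER))
# Core platform migration: AMBER (At risk now: corrective action agreed and under way)
```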

And our old friends – papers that are imprecise, or which just don’t seem to make sense or say a great deal – are too common.

Some of the traps are less apparent…

The biggest data traps, I believe, lie in working out what’s not there at all, but should be.

For example, in public companies, mutuals and large public enterprises, it’s possible that the measures of most interest to investors, members or key stakeholders are not the ones the Board spends its time on. This is because Board reporting calendars can quickly become immutable, which can lead, over time, to a misalignment with those stakeholders’ interests that may go unnoticed in the short term.

Annoyingly, some measures, meant to help, can inadvertently have the opposite effect.

  • The “streamlining” of packs through templates and the like can lead to the omission of significant outliers and of important counter-narrative data, and to summaries that end up materially biased (the bias not being readily apparent to the reader, given the thinness of the supporting information presented).
  • The popular “reporting by exception only” can create a contextual void for readers not as steeped in the minutiae of the business as the executive, and significant but unreported secular trends may thus be missed.

And of course, the volume of fiduciary and regulatory reporting may cause backward-looking metrics to become the norm, so forward-looking material, and papers that deal with the growth/value challenge, can become thin on the ground.

Some solutions…

  • It may be worth reviewing your organisation’s reporting guidelines, to check that these are not inadvertently contributing to the problem by causing authors to self-censor material to fit templates and document-length restrictions.
  • Some organisations create a single database of information across the enterprise, so that everyone works from the same (verified) information and outliers and other “inconvenient” data are not forgotten; but this sort of enterprise-wide solution is a major undertaking.
  • For some entities, it could be insightful to capture “unconventional” data, such as Net Promoter Scores from customers, patients and the like; rankings produced by third parties (e.g. Dr Foster, Glassdoor); and summaries of analyst notes on the sector and of competitors’ investor calls. These can capture information that conventional reporting might overlook, and so provide a valuable complement to what is being reported.
  • If there’s any possibility that explanatory narratives and their data may diverge, it can be helpful to have the data placed in an Appendix (or made otherwise accessible). If NEDs occasionally review the numbers with a highly sceptical mindset and, as a result, are able to make the numbers tell a markedly different story from the one reported, then you know that confirmation bias (or worse) is at work.
  • Of course, the old saw that “a picture is worth a thousand words” can be true for data, too. It’s commonplace to see unenlightening tables of numbers populate reports, when other forms of representation – graphs, heatmaps, diagrams, indeed tables where the key information is highlighted in some way – can go a long way towards making information clearer and insights more obvious (see the sketch after this list).
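By way of illustration, here is a minimal sketch assuming Python with pandas and matplotlib to hand; the metric, figures and business-unit names are all invented. It turns a plain table of numbers into a heatmap in which the outlier announces itself:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented figures: complaints per 1,000 customers, by business unit and quarter.
data = pd.DataFrame(
    {"Q1": [1.2, 0.9, 3.8], "Q2": [1.1, 1.0, 4.6], "Q3": [1.3, 0.8, 5.9]},
    index=["Retail", "Commercial", "Wealth"],
)

fig, ax = plt.subplots()
im = ax.imshow(data.values, cmap="Reds")  # darker cell = more complaints

# Label the axes from the table itself, and print each value in its cell,
# so the chart carries all the information the original table did.
ax.set_xticks(range(len(data.columns)))
ax.set_xticklabels(data.columns)
ax.set_yticks(range(len(data.index)))
ax.set_yticklabels(data.index)
for i in range(len(data.index)):
    for j in range(len(data.columns)):
        ax.text(j, i, data.iloc[i, j], ha="center", va="center")

ax.set_title("Complaints per 1,000 customers")
fig.colorbar(im, ax=ax)
plt.show()
```

Printed as rows of digits, the same table would bury the worsening Wealth trend; coloured and annotated, it is the first thing the eye finds.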

And, ahem, teaching people how to plan documents properly, and how to write “short”, goes a very, very, very long way to effecting improvements.

As always, there’s much to do and time is short, so good luck and get cracking.