March 15, 2021 6:21 pm
Good numbers and facts are an essential ingredient in most B2B tech marketing content. They help ground often high-level thought leadership concepts in reality. Whether it’s for a white paper, a blog or an infographic, having up-to-date, relevant and well-sourced statistics and market research brings authority and credibility to your content.
That sounds like an obvious statement, but you’d be surprised how many people either a) don’t bother to include this kind of information in the first place or b) do include it but never bother to verify the accuracy, date and source of the information they are citing.
Ask yourself: How confident are you that any numbers, percentages or other research findings you have quoted in your recent marketing material are bona fide? Do you know where those numbers originally came from? Do you know how old the research they supposedly came from is? Was that research or survey from a credible source or statistically valid sample size?
Without answers to those questions, the result is lazy content creation. It's why you often see the same old stats rolled out time and again around the big enterprise tech topics such as AI, big data, cloud and cybersecurity. You see them repeated and quoted in everything from tech vendor keynote presentations to blogs and social posts.
People see a stat quoted by someone else, assume it must be correct and then use it in their own material. Someone else sees that material and also borrows that stat for their content. And this goes on and on, often for many years until the provenance of that data is long forgotten. It’s just taken for granted.
Let me give you an example. A subject matter expert at a tech vendor had quoted a well-used stat about the increase in the amount of data people generate each day to emphasise the explosion of big data. A quick Google search showed that the stat had been cited in a lot of material. But it got interesting when I tried to dig further and verify the original source of the stat and the date the research was published.
Most people were just using the stat from secondary sources – a curated list of big data stats and a big data infographic – that did not identify the original research source. More worryingly, I couldn’t definitively identify the original source of the stat. One of the earliest uses of the stat was in a now long-deleted blog by a big tech vendor. Another possibility was that it came from the keynote speech of a different big tech CEO.
However, neither of those possible sources matched up perfectly and neither was backed by a specific piece of research. And, significantly, both were over 12 years old. So even if the numbers were correct, they were well past their sell-by date and no longer credible.
Your content and your brand depend on credibility and authenticity. In the example above, we were able to help a client avoid repeating a dubious and, by tech standards, ancient stat. In fact, they gained credibility by flagging a problematic piece of information that so many others had failed to question.
When we come across cases like this, we use our journalistic research skills to find a properly sourced, recent and relevant number. Most of us are tech journalism veterans, with media backgrounds where fact-checking rigour was instilled over many years in the profession. It goes against our nature and inquisitiveness to accept a number or piece of data at face value without checking its accuracy and source. If your organisation ever needs help with something like this, please get in touch.
Follow us on Twitter – @ColContent