Generative AI and diginomica - where we stand

The diginomica team - May 10, 2023
While others extol the virtues of generative AI in content creation, diginomica draws a line in the sand.

Since diginomica was founded in 2013, we have done our utmost to demystify the impact of emerging technology on the enterprise. Generative AI is different. 

The surge in adoption of Large Language Models (LLMs) poses profound questions about the future of editorial content - not the least of which is the potential erosion of reader trust. 

The diginomica editorial team will continue to document the undoubted potential, as well as the pitfalls, of all forms of AI on the enterprise. We have always felt that AI can aid in transformational projects - but only if the ethical questions of data privacy, bias and governance are thoroughly addressed. 

However, we are drawing a firm line in the sand when it comes to the use of generative AI in our own content creation. Other media outlets enthusiastically proclaim their plans to explore the production of content via LLMs. Our readers are rightly wondering, given our in-depth coverage of this technology, if we would ever use these tools to write diginomica content.

The answer is a resounding no!

Context lacking

LLMs are a powerful way to generate text - but generative AI lacks the real world context and informed opinions that are core to our editorial approach. Nor can generative AI distinguish facts from the automated assembly of plausible-sounding statements, mixed with the occasional falsehood. 

From diginomica's inception, we have always steered away from the volume-content-for-SEO publishing model, in favor of quality content for a dedicated readership. We’ve intentionally avoided the ad-supported UX cesspool that supports the page volume model. 

Our approach to generative AI is no exception. Therefore, timed with our ten year anniversary and an ongoing responsibility to reader trust, we commit that the diginomica editorial team, including freelance contributors, will not use LLMs in the writing of any content on diginomica. Not now, not next year, not ever. 

Use cases

This is not to say that machines will not play an important role across disciplines. There are, as we continue to document and track, a growing number of positive use cases in particular disciplines and business sectors where AI, generative or otherwise, will clearly play a disruptive role in changing the way the world works. And such technology will be embedded in other, more established tech, with potential for genuine benefits to accrue. 

Journalism is clearly impacted here. We support our writers using machine tools for spell and grammar check. A number of our team use machine-generated transcripts of interviews. This is as it should be - writers should absolutely use enabling tools, AI or otherwise, that help them craft better stories (with better spelling!). 

But when it comes to the creation of actual words on the screen, the human-crafted element of that must be absolute. At a time when society as a whole is rightly questioning what content can be trusted, we want readers to know that diginomica's content, however imperfect it may sometimes be, is written by domain experts with a passion for the enterprise - and a willingness to put their viewpoints out there. 

We may have an occasional factual reporting error - which we will always correct immediately - but we won’t be publishing any hallucinations from LLMs, because we will not be using them to write our content.  


As one of our exceptional contributors, Chris Middleton, put it recently on social media: 

To be human is to be opinionated. So, be that person. Be what GPT can’t be without scraping your content.

If anything, you will see diginomica in our second decade become more human, more opinionated, more determined to demystify tech hype with real world commentary.

To be human is to evaluate an enterprise project or technology or vendor, and take a position. We take these positions often. To be human is also to share methodologies for better project results, honed by 'do and don’t' conclusions based on firsthand experience.

LLMs cannot realistically take definitive positions on technology, nor explain how they arrived at them. At best, LLMs can simulate positions in a debate. But that's a computing exercise, not a roadmap for decision makers and Informed Buyers. LLMs don't have their own real world experiences to draw on. They don't go to events and raise pesky questions at press conferences. 

As Middleton points out, what gets called “AI” often isn’t. GPT, as he asserts, is a “derivative work generator,” not true AI. At diginomica, we will continue to seek to define what constitutes AI with as much precision and supporting data as we can find. 

Obviously, the use cases will evolve. At some point, years from now, we may have to revisit our position on LLMs as their capabilities become more sophisticated. But that day is not now - nor is it anytime soon. We believe readers need an unequivocal position from us on this subject, one that provides reassurance around delivery of trusted content.

Now you have it.
