Reading recent articles and using AI tools has led me to reflect on the need to formulate my own editorial policy and to be fully transparent with my readers.
In the closing paragraph of a recent article on AI published by The Economist, Yuval Noah Harari wrote:
«We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of ai tools in the public sphere, and regulate ai before it regulates us. And the first regulation I would suggest is to make it mandatory for ai to disclose that it is an ai. If I am having a conversation with someone, and I cannot tell whether it is a human or an ai—that’s the end of democracy.
This text has been generated by a human.
Or has it?»
Who are we talking to?
The issue of who we are talking to, or who has written an article, was picked up again this weekend in a “Letter from the editor on generative AI” by Roula Khalaf, published by the Financial Times.
In her central paragraph, she wrote:
“The FT is also a pioneer in the business of digital journalism and our business colleagues will embrace AI to provide services for readers and clients and sustain our record of effective innovation. Our newsroom too must remain a hub for innovation. It is important and necessary for the FT to have a team in the newsroom that can experiment responsibly with AI tools to assist journalists in tasks such as mining data, analysing text and images and translation. We won’t publish photorealistic images generated by AI but we will explore the use of AI-augmented visuals (infographics, diagrams, photos) and when we do we will make that clear to the reader. This will not affect artists’ illustrations for the FT. The team will also consider, always with human oversight, generative AI’s summarising abilities.”
Since the launch of ChatGPT and generative AI image apps, many of us have been curiously exploring these tools, surprised by their speed and multiple abilities.
Personally, I view ChatGPT as a “personal assistant” and sparring partner that helps me broaden my knowledge on topics of interest, speeds up my thinking, and helps me write better text. However, we must be aware that these tools can also become powerful in the hands of “charlatans” (I loved the term), as Alex Fergnani recently called them in a post about his own profession.
A transparent policy
I strongly believe that when we produce content we must be forthcoming about our use of AI tools. Hence, I will now adopt an FT-style transparent editorial policy, and I invite you to do the same. I will indicate at the bottom of my posts which AI tools I have used (if any) and for what purpose.
This post was written with no assistance from AI tools.