AI and Journalism — The Real Danger

Michael Rosenblum
4 min read · Dec 19, 2023
Image courtesy Wikimedia Commons

Artificial Intelligence represents an enormous threat to society in general and journalism in particular, but not in the way that you imagine.

Deeply influenced by our addiction to movies and video, we perceive the threat of AI through the lens of HAL in 2001: A Space Odyssey, or Colossus in Colossus: The Forbin Project — computers that take control of the world as we become slaves to our robot masters.

This notion of the creation overpowering the creator is as old as Frankenstein, and about as factually correct.

The truth is that Artificial Intelligence is neither artificial, nor is it particularly intelligent. But it is dangerous.

We may fear the arrival of AI, but we have already been living with its foundations for some time. In 1965, Gordon Moore observed that the number of transistors on a chip was doubling roughly every year, a pace he later revised to every two years, even as the cost per transistor fell. Rather remarkably, Moore’s Law, as it is now called, has proven incredibly prescient and accurate.
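The compounding behind that observation is easy to underestimate, so here is a minimal sketch of it in Python (assuming, for illustration, the revised two-year doubling period; the 60-year span is just an example):

```python
# Toy illustration of exponential doubling in the spirit of Moore's Law.
# Assumes a two-year doubling period, per Moore's revised estimate.

def doublings(years, period=2):
    """Number of doublings that fit in a span of years."""
    return years // period

def growth_factor(years, period=2):
    """Total multiplicative growth after `years` years."""
    return 2 ** doublings(years, period)

# 60 years at one doubling every two years: 30 doublings,
# roughly a billion-fold increase.
print(growth_factor(60))  # 2**30 = 1073741824
```

Thirty doublings turn one transistor's worth of capability into a billion, which is why devices from toasters to phones now carry computing power unimaginable in 1965.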

All that we do now is based on the speed of the microprocessors that inhabit every device we own, from our toasters to our smartphones. When you watch a video, for example, you aren't really watching a video — that idea is a remnant of our long relationship with physical film and analog projection. What you are watching is an incredibly fast stream of data. That’s why you can so simply edit your own video on your phone and move it around the world with such incredible speed.

One of the first applications of AI was in the realm of translation. In the 1950s, as computers first began to appear, machine translation was pursued as a goal. Armies of linguists were employed to analyze how languages worked — structure, grammar and so on — on the assumption that there must be a key to it all, a basic law of language.

Instead, as computers got faster and the ability to crunch big data became a reality, a far easier way of creating translation was found. In Canada, which is by law a bilingual country, all of the never-ending debates in the Canadian Parliament are recorded in both French and English. Thus, an aggregate data bank of more than 100 years of raw translation work done by humans could be fed into a computer, and based on very fast comparisons, French could instantly be turned into English, and the reverse.

As it turned out, this worked, and this fast crunching of big data is now the basis of all instant translation software. Not really intelligent, but rather very fast slicing and dicing of existing data — data accumulated by human beings.
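The mechanics can be sketched with a toy example. Given a store of aligned sentence pairs (a stand-in for a real parallel corpus like the Canadian Hansard; the pairs below are invented for illustration), "translation" amounts to a fast lookup against work humans have already done:

```python
# Minimal sketch of corpus-lookup "translation". The aligned pairs below
# are invented stand-ins for a real parallel corpus such as the
# Canadian parliamentary record (Hansard).

aligned_corpus = {
    "le débat est ouvert": "the debate is open",
    "la séance est levée": "the sitting is adjourned",
    "je cède la parole": "i yield the floor",
}

def translate(french_sentence):
    """Return the stored human translation, or None if never seen."""
    return aligned_corpus.get(french_sentence.lower())

print(translate("Le débat est ouvert"))  # the debate is open
print(translate("bonjour tout le monde"))  # None: no human did this work
```

Real statistical systems match at the level of phrases and probabilities rather than whole sentences, but the principle is the same: nothing is translated that a human has not, in effect, translated first.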

Because processor speeds are so fast now, translation software is able to continually comb the web for more and more examples of human-made translations and add them to its ‘store of knowledge’.

That is AI at work. Not intelligent. Just data crunching. Taking what people have done, slicing and dicing and putting it back together with phenomenal speed.
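That "slice, dice and reassemble" loop can itself be sketched as a toy Markov chain: the program learns only which words follow which in human-written text, then stitches fragments back together. No understanding is involved, and every word of the output was put there by a person. (The training sentences are invented for illustration.)

```python
import random
from collections import defaultdict

# Toy "remix" generator: learn word-to-next-word transitions from
# human-written text, then recombine the fragments.
training_text = (
    "the reporter filed the story. "
    "the editor read the story. "
    "the reporter read the notes."
)

words = training_text.split()
transitions = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def remix(start, length=6, seed=0):
    """Walk the transition table to produce recombined text."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:  # dead end: no human ever continued this word
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(remix("the"))
```

The output reads like plausible text, yet the generator contains not a single word it was not given. Large language models are vastly more sophisticated than this sketch, but the point stands: the raw material is always human work.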

ChatGPT does exactly the same thing.

Ask ChatGPT to write an article about Israel and Palestine with a focus on X, Y and Z, and the bots will comb the existing body of work online, slice it, dice it, remix it and spit out a very readable ‘new’ version just for you.

But it is wrong to call this journalism. The correct term here would be plagiarism.

AI and ChatGPT are nothing but very high end plagiarism.

The fact that existing journalism companies are not just allowing ChatGPT articles to be published but are actively embracing this ‘new’ technology (‘Embrace ChatGPT, Don’t Fear It,’ the refrain goes) should give us pause.

The problem with AI generation of creative content — and this is not limited to writing; it also includes art, music and soon video and movies — is that all AI is capable of is searching and repurposing what human beings have already done.

Of course, it is incredibly fast and cheap — far cheaper than employing a writer or a musician. But what AI produces is just restructured content that has already been created by someone else.

First, of course, the original content creator receives neither credit nor payment. The work is quite simply stolen.

Second, and I think this is the real danger of AI and ChatGPT: because nothing new is created and the same old material is simply recycled, albeit in a perhaps more polished way, in the long run we are building a kind of intellectual cul-de-sac in which nothing new — no new ideas, no new thoughts, no new approaches — will ever be explored.

That’s the real danger of AI — not robots taking over the world, but rather the death of creativity and originality.

In 2001, Bowman fights HAL for his life. In the world of ChatGPT, no one will care enough to do even that.


Michael Rosenblum

Co-Founder of TheVJ.com. Father of Videojournalism; trained 40,000+ VJs and built VJ-driven networks worldwide. Video Revolution. Founder, Current TV, NY Times TV, etc.