In the latest episode of Digital Marketing Answered, Tim Butler, CEO of Innovation Visual, sat down with Professor Paul Watson, Director of the UK's National Innovation Centre for Data, to explore the fundamentals of artificial intelligence (AI) and machine learning. The discussion covered key aspects of AI, including the benefits and risks of large language models (LLMs) like ChatGPT, model bias, and the importance of human oversight in the AI-driven marketing landscape.
Want to listen instead? Find the full interview on Apple Podcasts here. Prefer Spotify? Listen here instead. Don’t forget to subscribe to our YouTube channel to get the latest Digital Marketing Answered™ updates.
During the conversation, Professor Watson provided a valuable overview of the history of AI and machine learning. While AI has been around for decades, with initial developments dating back to the 1940s, Professor Watson highlighted how significant advances have occurred in recent years due to three main factors: better algorithms, more data, and faster computers.
“If you look back at the history of AI, it really begins around 1948, when the first modern computer was built at Manchester University,” Professor Watson explained. Alan Turing and other pioneers began exploring how computers could replicate human cognition. Although progress was slower than expected, the combination of recent technological advancements has helped AI make great strides.
One of the most significant developments in AI has been neural networks, which mimic the way the brain works by connecting neurons in layers. These networks have evolved from simple models to large-scale systems capable of handling vast amounts of data.
In explaining how neural networks work, Professor Watson provided a useful analogy related to autonomous vehicles: "The image from the camera would come into the first layer of the neural network. As you know, images in computers are represented as grids of numbers—pixels. The numbers flow into the first layer of the network. The neurons take in inputs from the image, apply a mathematical function, and produce a number that they send out to every neuron in the next layer. Each neuron in the next layer does the same—taking in all the numbers, applying a function, and passing it to the next layer. Eventually, after many layers, you get to the output neurons, which specialise in recognising things. In this example, you might have a ‘car’ neuron, a ‘lorry’ neuron, and a ‘road sign’ neuron. The idea is that when you show an image with a car in it, you want the ‘car’ neuron to light up, and the others to remain low. That’s how the system identifies a car.”
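To make that layered flow more concrete, here is a minimal sketch of a single forward pass in Python with NumPy. Everything in it, the image, the weights, and the layer sizes, is invented for illustration; a real image classifier would be trained on labelled data and be vastly larger.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)  # a simple per-neuron mathematical function

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()       # turn output scores into probabilities

rng = np.random.default_rng(0)

# Illustrative only: a flattened 8x8 "image" and two layers of random weights.
image = rng.random(64)                                # pixels as a grid of numbers
W1, b1 = rng.standard_normal((32, 64)), np.zeros(32)  # first layer: 32 neurons
W2, b2 = rng.standard_normal((3, 32)), np.zeros(3)    # output layer: 3 neurons

hidden = relu(W1 @ image + b1)      # each neuron: weighted sum, then a function
scores = softmax(W2 @ hidden + b2)  # one score per output neuron

labels = ["car", "lorry", "road sign"]  # the 'car' neuron should "light up"
print(dict(zip(labels, scores.round(3))))
```

The point of the sketch is the shape of the computation: numbers flow in, each neuron applies a weighted sum and a function, and each output neuron produces a score for one category.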
The conversation shifted towards large language models (LLMs), which have gained significant attention in recent years due to tools like ChatGPT. These models are essentially massive neural networks trained on billions of lines of text to predict the next word in a sentence. But as Professor Watson pointed out, LLMs don't always pick the single most likely next word; instead, they sample from a set of probable words, adding an element of randomness to the output.
This randomness, Professor Watson explained, is both a “blessing and a curse.” It can make the output seem creative, but it can also lead to what’s known as hallucination—when the model generates text that seems convincing but is actually incorrect.
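That trade-off comes down to how the model chooses each next word. The toy example below, with an invented vocabulary and invented probabilities, contrasts always taking the most likely word (greedy decoding) with sampling from the distribution, which is where the variety, and some of the risk, comes from.

```python
import numpy as np

rng = np.random.default_rng()

# Invented example: model probabilities for the next word after
# the prompt "The capital of France is".
words = ["Paris", "Lyon", "beautiful", "London"]
probs = np.array([0.80, 0.08, 0.07, 0.05])

# Greedy decoding always picks the single most likely word...
print("greedy:", words[int(np.argmax(probs))])

# ...whereas sampling draws from the whole distribution, so less likely
# (and sometimes flatly wrong) words occasionally come out.
for _ in range(5):
    print("sampled:", rng.choice(words, p=probs))
```

Run it a few times and the sampled line will occasionally read "London": fluent, confident, and wrong, which is hallucination in miniature.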
Professor Watson recounted a particularly striking example: “There was a large American company that provides legal services based on court case information. Their AI-generated output included references to court cases that didn’t exist, citing a ruling from 2025—a future date!” This is a clear example of hallucination, where the AI generates plausible-sounding but false information.
The discussion highlighted the importance of human oversight when using LLMs, especially in marketing. While tools like ChatGPT can help overcome the ‘blank page problem’, marketers must review and refine the output to ensure its accuracy. As Professor Watson advised: “LLMs should be treated as an assistant, not a replacement for human expertise.”
Learn more about how we interact with AI in search from our blog on The Future of AI in Search.
One of the major concerns discussed in the episode was bias in AI models. Machine learning systems are only as good as the data they are trained on, and if that data is biased, the model will reflect those biases. Professor Watson pointed out that this is a serious issue that marketers need to be aware of.
For instance, Professor Watson discussed a company that developed an AI system to rank job applicants based on their CVs. Because the training data came from a company with a male-dominated workforce, the system learned to penalise applicants, predominantly women, who had taken career breaks. As Professor Watson explained: “The system discriminated against women because the training data was based on a male-dominated workforce... This is why it’s essential to carefully consider the data you use.”
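A deliberately simplified sketch shows how this happens. The data below is synthetic and is not the actual system Professor Watson described: if historical hiring decisions penalised career breaks, a model trained on those decisions faithfully learns the same penalty.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic "historical hiring" data: candidates of equal ability, but
# past decisions penalised career breaks (a feature correlated with gender).
n = 1000
skill = rng.normal(size=n)
career_break = rng.integers(0, 2, size=n)
hired = (skill - 1.5 * career_break + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, career_break])
model = LogisticRegression().fit(X, hired)

# The learned career-break coefficient is strongly negative: the model
# has absorbed the historical bias straight from the data.
print(dict(zip(["skill", "career_break"], model.coef_[0].round(2))))
```

Nothing in the code is malicious; the bias arrives entirely through the training data, which is exactly why reviewing that data matters.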
Bias is not just a legal or ethical issue—it can also negatively impact brand reputation and lead to inaccurate decision-making. Marketers must ensure that the data they use is representative and fair, and they should regularly review models to identify and correct any biases.
Another area of concern is the legal and ethical use of generative AI tools. Professor Watson discussed how some companies have started using generative AI to produce text, images, and even legal references. However, this has raised questions about copyright and intellectual property.
“There are a lot of ongoing court cases around whether AI companies can use data without permission... Individual artists or journalists may find it hard to fight back, but large companies like Disney, with their legal teams, are starting to push back,” Professor Watson said.
This highlights the need for marketing leaders to be cautious when using generative AI tools. Before incorporating AI-generated content into campaigns, businesses must ensure that they are not infringing on copyright or intellectual property. Failing to do so could lead to legal action and reputational damage.
Read more about HubSpot’s latest AI toolkit in our fascinating blog about Content Hub.
With AI tools becoming more sophisticated, data privacy has become another key concern. Marketers and businesses need to be aware of the risks involved in inputting proprietary or sensitive data into AI systems. As Professor Watson explained: “Anything you input into some large language models could be used to further train the model... You could see your private data resurface in future versions.”
Professor Watson offered two ways to mitigate this risk: either use AI systems that guarantee data privacy or deploy private versions of the tools on your own servers. This ensures that sensitive information remains secure and is not used to train public models.
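As a rough illustration of the second option, the snippet below assumes an open model already running on your own hardware behind an Ollama-style local HTTP endpoint; the URL, model name, and response format are assumptions that will vary with whichever tool you actually deploy. Because the request never leaves your server, the prompt cannot be folded into a public model's training data.

```python
import json
import urllib.request

# Assumes a locally hosted model server (e.g. an Ollama-style endpoint
# at localhost:11434); adjust the URL and model name for your setup.
payload = {
    "model": "llama3",
    "prompt": "Summarise this confidential briefing: ...",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```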
The key takeaway from this insightful discussion is that while AI and machine learning offer significant potential for marketing, human oversight remains essential. Whether it’s addressing bias, ensuring data privacy, or managing the risks of hallucination, marketers must take a hands-on approach when using AI tools.
As Professor Watson aptly summarised: “LLMs are great for assisting humans, but the risks are too high to use them autonomously without human oversight.” Marketers who embrace AI responsibly can unlock new efficiencies and capabilities while safeguarding their brand's integrity.
At Innovation Visual, we specialise in helping market leaders unlock the full potential of AI in their marketing strategy while still ensuring human oversight and data integrity. Contact our expert team today to learn how we can help you utilise AI responsibly for lead generation, customer engagement, and long-term business growth.
To get in touch with Professor Paul Watson directly, please contact him through the National Innovation Centre for Data. Find the case studies Paul mentions in the episode here.
For more interesting discussions, make sure to tune into our next episode of Digital Marketing Answered™, and don’t forget to like, subscribe, and comment on our YouTube channel.