
AI in Marketing: Fun, Facts and Foibles

  • Writer: Trevor Stolber
  • Apr 6
  • 8 min read

Updated: Apr 11

In this article, I take a deep dive into some of my thoughts and posts on AI in general, with a bias toward marketing use cases.





AI technology is here to stay, so let's look at how it's shaping marketing today.

One of the quotes I love is:


"AI won't take your job but someone using it will"

If you ask certain people, they will tell you no job is safe; others will tell you it is overblown nonsense.


Like all things as emotive as this, the answer likely lies somewhere in between, but where?




The Thing looks a lot like a Thnig but is it actually the Tihng?


The main issue with almost anything AI is that the “thnig” it does looks a lot like the “thing” you asked it to do, but really it’s a different “tihng” that still looks a lot like a “thnig.”


Hopefully those reading this can see it more clearly than I can, even with the infamous red squiggles I try so hard to avoid (which, ironically, can be considered a form of AI, and also something that has a significant impact on an AI-generated score ... hmmmm).

What I mean here is that a Thing looks a lot like a Thnig when you read it, but they are different: one is correct and one is not. At a glance they look very similar.


Decaying feedback loop

Something that Stephan Bajaio and I talk a lot about at VibeLogic is the decaying feedback loop of AI: something of an information death spiral, if you will.


When LLMs are trained on data produced by other LLMs, you enter this decaying feedback loop.


Search engines have strived to provide “information gain” and have developed many techniques and algorithms to reward content that genuinely provides information gain. That is, something new to bring to the party. 


They have actively avoided any content that doesn’t really add to the general knowledge on a topic.


So, when the majority of content on the web is produced by generative AI, from LLMs trained on that same data, what happens?



NOTE: Credit to Nick LeRoy for suggesting using Chat GPT to summarize text and concepts into images - which is exactly what I did here.


It becomes less effective (we are already seeing the plateau here) and then enters a decaying feedback loop.
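The decay can be illustrated with a toy simulation (a deliberately simplified sketch, not a model of real LLM training): fit a simple distribution to some "human" data, generate fresh samples from the fit, keep only the most typical outputs (mimicking a model's bias toward its most probable text), and repeat. The diversity of the data shrinks with each generation.

```python
import random
import statistics

def next_generation(samples, keep=0.8):
    """One 'retraining' step: fit a Gaussian to the samples, generate
    fresh output from the fit, then keep only the most typical
    (central) outputs, mimicking a model's bias toward probable text."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    fresh = [random.gauss(mu, sigma) for _ in range(len(samples))]
    fresh.sort(key=lambda x: abs(x - mu))
    return fresh[: int(len(fresh) * keep)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(4000)]  # the "human" data
spread = [statistics.stdev(data)]
for _ in range(6):
    data = next_generation(data)
    spread.append(statistics.stdev(data))

# spread falls generation after generation: the information death spiral
print([round(s, 3) for s in spread])
```

The numbers aren't the point; the shape is: each generation has less variety to learn from than the last, which is exactly why genuinely new human input becomes more valuable over time.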


This is why expert opinion and human insight are still very valuable; in fact, they become more valuable as time goes on, and this is not immediately obvious.



Is Unique New?

Search engines strive for information gain - which is something new to add to the conversation!


LLMs are built using existing information, yet their output has been deemed valid, unique content.


The question is, is this unique content really new? 


Does it provide information gain? (more on information gain later)



Don’t be surprised when your AI makes something up

This is something that still surprises me to this day: people are surprised when generative AI text engines make something up ...


Generative AI ... generative ... the clue is in the name.

This is without doubt a revolutionary technology.


But they are designed to make things up.


Don’t be surprised when they do.



A real-world example of this happened in front of me recently while I was getting my hair cut. A girl in the shop was chatting with one of her friends about potentially moving to a new city. She had asked a search engine (I believe it was a Gemini response) whether a specific state and city was the best possible place to move to. It confidently responded that the state and city in question was the best possible combination and gave a bunch of information and resources as to why that was the case.

The problem here was intent and expectation matching.


The Gen AI side of it interpreted the intent as: provide information on why this potential new city and state is the best possible place for her and her family to move to. However, the expectation was: advise me whether this city and state is the best place to move my family to.

She had interpreted the answer literally, believing that any other city and state combination would not have been as good an option. However, if she had asked the same question, the same way, about any other city and state combination, she would likely have gotten similar validation as to why that was the best choice.


I feel it important to note that I was not eavesdropping; the conversation was happening right next to me and was hard to ignore, especially as I could see, from a technology perspective, where things were going wrong and that nobody involved had realized it.

This is potentially where some of the biggest societal risks are, and they seem very minor. However, when people become reliant on this without checking and without critical thinking, it's a slippery slope.


AI Guard Rails

Let’s face it - AI is both very powerful and very dangerous.


To use it effectively while keeping pace with technology development, you need a partner that deeply understands the technology and can provide a framework with effective guard rails and governance.


We have all seen the examples of incorrect AI output finding its way into legal texts, valuable and sensitive IP being exposed, and information misinterpreted in a dangerous way. I shudder to think how much vibe coding has introduced bogus, ineffective functions into production code that pass a basic sniff test and even some unit tests.


NOTE: I have first-hand experience of a function to detect the endianness of a system (do I get a super geek award for 1, knowing what that is and 2, for testing for it?). My function looked good. I ran it on a big-endian system and it gave me the correct response. I then ran it on a little-endian system and it told me it was a big-endian system. I was pretty sure I knew what the systems I was dealing with were - ARM vs Intel chips. However, my function got it wrong. It looked good, it didn't fail, and it gave a "valid" response. After a bit of debugging and a few back-and-forths with Chat GPT, I asked it: does this function actually do anything? And it confidently responded, "Oh no, this is just an example of how you might structure a function to check for the endianness of a system."
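The broken function itself isn't shown above, but for contrast, a minimal endianness check that actually does the work might look like this in Python (a sketch, not the author's original code):

```python
import struct
import sys

def is_little_endian() -> bool:
    # Pack the integer 1 using native byte order; on a little-endian
    # system the least significant byte (0x01) comes first in memory.
    return struct.pack("=I", 1)[0] == 1

# Cross-check against the standard library's own answer.
assert is_little_endian() == (sys.byteorder == "little")
```

The point is not the three lines of code; it is that the check can be verified against a known ground truth, which is exactly what exposed the do-nothing function.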

The crucial thing I did was check: I already had a test plan and knew what the outputs should be.


Check. That's it, just actually check the output.

The troubling thing, though, is that I am sure there are thousands of similar examples that made it into production code because they generally "work" and seem like they do what was asked.
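To make the point concrete, here is a deliberately contrived sketch (hypothetical code, not anything from the anecdote above) of a function that passes a casual sniff test but fails the moment you compare it against a known expected value:

```python
def average_buggy(values):
    # Passes a casual sniff test -- it runs and returns a number --
    # but silently drops the last element of the list.
    return sum(values[:-1]) / len(values)

def average_fixed(values):
    # The correct version: sum everything, divide by the count.
    return sum(values) / len(values)

# The sniff test: "it returned a number, looks fine."
result = average_buggy([2, 2, 2])            # gives 1.333..., not 2

# The actual check: compare against a known expected output.
expected = 2
print(result == expected)                    # False -- the check catches the bug
print(average_fixed([2, 2, 2]) == expected)  # True
```

A test plan with known expected outputs turns "it seems to work" into "it demonstrably works," and that is the whole gate.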


The Age of Agentic

It is truly the age of agentic. This is probably where the real power of AI technology lies.


Correct and appropriate implementation of AI agents can revolutionize your business.


We have done extensive research in this area.


AI agents can be very effective - but it’s not what you think.




It is not - let’s fire a whole team and replace them with AI employees.


What it is, is this: let's apply this technology and make our existing team much more efficient.


"AI won’t take your job, but someone using it will."

As someone who has spent a career focused on process and automation, and who generally loves efficiency, this is really exciting to me. Getting the right agentic workflows and processes in place opens up a whole new level of productivity.


The Sniff Test

The “sniff test” is something we all do.

We instinctively look at something and give it a quick reflex sniff test.


Does it look and sound right?

History and experience have taught us to use and rely on our instincts in this sniff test.


However …


If AI passes the sniff test and that’s the only gate you have - that’s a problem.


You see, the trouble with AI is that it usually passes the sniff test, but that is the wrong bar to apply to it.


The Trouble with AI is….

It makes people lazy 


That's it.


I can't be bothered to write anything more on the topic.


Perhaps I should go to Chat GPT and say, "Hey, make me some content about the trouble with AI." And you see, therein lies a problem.


It will spit out something that looks very credible, but it very likely won't be aligned with your thoughts, informed by data from the appropriate angles, or validated by any reasonable method.


Most of my ramblings are ... well ... just that ... ramblings, but at least they are my ramblings, straight from my brain. Hopefully that's a good thing.

If you are still reading this - bravo, and thank you!

Critical Thinking is the New Super Power


In an age where seemingly everything is available almost instantly, people do not stop to check and think about what they have.


This makes critical thinking the new super power.

It is distinctly lacking in so many places.


So, stop for a moment, question something, and check if that citation or stat is really valid.


You may well be surprised.


Don’t Trust, Verify, then Verify Again!

"Trust, then verify" has been a mantra used by many.

However, in an AI world we need to verify, then verify again, and only then can we start to trust.



Information Gain with Gen AI

Information gain is something that Google talks about a lot. 


This basically means adding something new to the world, adding something significant and of value.


So we could argue that whilst Gen AI content may be technically new and arguably unique, because its basis is in what has already been produced, it is hard to argue that Gen AI can provide information gain.


In a recent LinkedIn post I broached exactly this topic, and there was some very valuable feedback - which itself could be considered information gain.
It showed that my perspective and thought process on what constitutes information gain was not the same as others'. The concept of human-curated AI content as information gain came up, which was an interesting idea to me.

Statistics

95.38% of all statistics can be misleading*

The trouble with statistics is they can be misleading.


But what happens when you bring Gen AI into the mix?


You see, Gen AI will typically give you a statistic if you ask for it.


Asking for a statistic on a general topic will produce an answer, but it is highly unlikely to be appropriate or relevant, and that is potentially more dangerous than it is useful.


In another LinkedIn post, someone compared a Google SERP to a Perplexity SERP and argued that because Perplexity had given a statistic and Google hadn't, it was a win for Perplexity. However, the question was so general and vague that no useful matching statistic could be returned. In my view that was a win for Google's regular organic results, rather than having an AI make something up.

* statistic made up for comic effect



You Still Need to THINK

In an AI driven world you still need to think!

There is a very real danger to humanity if we blindly let AI take over all tasks.


AI technology is amazing and has great utility with the potential to significantly enhance human productivity. 


However ...


If we let it blindly take over without thinking, without checking and without adding the crucial human element to its output we have begun to fail.



The Human is still Important 

So a useful and very important takeaway here is


The human is still important



