In this Episode
- [05:00] Sam Richter advises always checking sources for fact work and using tools like Perplexity.ai for credible data.
- [10:05] Sam suggests using AI tools to write detailed prompts and reduce hallucinations.
- [18:10] Sam explains how to create custom GPTs in ChatGPT, Google, and Copilot to automate repetitive tasks and improve efficiency.
- [31:27] Sam shares case studies of how his clients have used AI to solve complex business problems, such as finding optimal warehouse locations and analyzing sales data.
- [34:40] Sam outlines his mindset SHIFT framework for using AI, which includes focusing strategically, simplifying workflow, harnessing expertise, innovating, and thriving as a human.
- [37:41] Sam explains the use of Canvas and projects in ChatGPT to organize and save prompts for future reference.
- [41:51] Sam provides examples of how agents can be used to make business decisions and automate processes, with a focus on the potential and current limitations of this technology.
Sam, it’s so great to have you on the show.
Thank you very much. It’s really an honor to be with you.
We did an interview on Marketing Speak earlier this year, which was awesome, and you shared some great wisdom. I wanted to have you back, but on this show, to talk about AI, because I heard you speak on a virtual summit we were both speakers on, and you gave such a great talk about AI: how to get the most out of it and how to be strategic in your use of it.
I just thought this was really important to share, and it’s not just for marketers, it’s for everybody. Let’s start with how you got so skilled at AI. How did you build that as a core competency?
We first need to define AI versus generative AI because they’re really two different things. Generative AI is a subset of AI. Traditional AI has been around for a long time, maybe 30 years or more. It’s machine learning.
For example, the easiest way people can think of it is like a Roomba vacuum cleaner. Every Roomba is programmed the same way, but your Roomba will eventually be different from mine because your house is different. It learns as it vacuums. It learns where your walls and your tables are. When it bumps into something, it stores that in its memory.
That’s AI, machine learning. That’s been around in manufacturing, Amazon plants, distribution, machine learning, forklifts, you name it. That’s been around for a while. I started building AI machine learning software probably 20 years ago in my sales work. I do a lot of work in sales and primarily in sales insight.
Generative AI is a subset of AI.
How do you learn more about other people so you can be really relevant in a phone call, a meeting, or a Zoom presentation? I started building AI-powered search applications that made much of what I shared easier to implement. Generative AI launched in November 2022.
It was just a natural progression to kind of sit down and figure it out. That’s kind of how I do it. How do I do it? I don’t know. I just sit down and figure it out. I guess I grew up in an era where, when you got Legos, it was just a box of Legos. It was up to you to figure out what the Starship looked like, not follow any instructions. That’s kind of how I’ve always done things. But generative AI is a little bit different.
Generative AI has some aspects of machine learning, of course, but really, it’s contextual learning. Search engines basically look inside a database to find where the words you’re searching for appear and on which websites. Generative AI actually generates, hence the name, something brand new based on an input. It’s a probability-and-prediction model, much more sophisticated than machine learning, and it works completely differently from traditional AI.
What are some of the most important considerations when someone is using generative AI, and they want to get a fact-checked or at least a non-hallucinated output, and they want to get all the answers and not just a partial answer? How do they prompt effectively?
I’ll start with the conclusion first. The conclusion is that it doesn’t matter, especially if you’re doing fact work; make sure you’re always checking your sources. For fact work, I’d probably use Perplexity. It’s perplexity.ai, trained to do a really good job of searching the internet, and they always cite their sources. If you see a piece of data, click it and make sure it’s from a credible source. Always remember that you know more than AI, your wisdom, and your experience in general. If you get a result and it just doesn’t feel right, it’s probably not right. You can go back and ask it, “Hey, where’d you get your sources from? Or can you share a direct link to the data you’re citing?” Always rely on your gut, and double- and triple-check your sources if you’re writing for work or school.
Generative AI generates something new from an input. It’s a probability-and-prediction model, much more sophisticated than machine learning.
Are you trying to get a medical diagnosis?
Yeah. Let’s go back to how you write a good prompt to decrease hallucinations. I don’t think we can ever eliminate hallucinations, but we can certainly decrease them. The first thing is what I call the ‘intern.’ Your generative AI system, whichever one you’re using, is an intern, because you have to treat it like a person. You talk to it, and you have to tell it who it is today. That’s called a ‘persona.’
For example, if I’m asking for medical information, I might say, “Today, you are the world’s premier doctor for melanoma, based out of the Mayo Clinic. You’ve seen more than 100,000 patients over your career.” You have to tell it who it is.
If I’m asking it to review a legal document, I might say, “Today, you are an intellectual property attorney specifically working with professional speakers on non-compete agreements.” Something like that. You have to tell it who it is. You also need a framework. The framework I follow is one I developed. It’s called the GUIDE framework. It’s pretty simple. There are lots of other frameworks, and some of them get very technical. But for general search or general usage, the GUIDE framework will work just great.
G is the goal. First, you tell it who it is: “Today, you are an expert doctor from the Mayo Clinic.” That’s the persona. Then the goal: “I’m going to upload some information from my chart. I would like you to review the information and explain it to me as if I were a sixth grader.”
Then the next one in the GUIDE is U, which is us. Who are you? Who are we? “I am a 50-year-old male, just went back to the doctor, traditionally healthy, but came back with some concerns on my chart.”
Then you have to give it the intended audience. That’s the I. In this example, it’s me. I’m the intended audience.

How do you want your information delivered? That’s the D. “I would like my information delivered in a table with bullet points, bold headers, subheads, and explained to me like I’m a sixth-grader.”
E is the environment in which it will be received. “I’ll be the one reviewing this information.” Versus if I were preparing a plan, I might say, “I’m going to be sharing this plan with the CEO of our company.”
The GUIDE framework: What’s the goal? Who are we? Who’s the intended audience? How do you want your information delivered? What’s the environment it’s going to be received in?
The final, most important thing you must do on every single prompt is end your prompt with, “Do you have any questions?” If you don’t do that, generative AI will make a guess. Sometimes the guess is wrong. That’s a hallucination. Generative AI is really a probability and prediction engine. It’s just making a prediction about what you’re looking for based on your inputs, and about the probability that the answer it delivers is correct.
It’s never going to tell you, “Hey, I guessed on this one.” You have to give it permission to ask questions. Now the systems, all of them, whichever one you’re using, are getting better at asking questions. But I still recommend you always put in after every single prompt: “Do you have any questions, or do you need further clarification? Was I clear? Did I make sense?” When you do that, you won’t eliminate hallucinations, but you’ll certainly decrease them.
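Sam’s GUIDE-plus-question pattern can be sketched as a simple template. This is an illustrative Python sketch; the function name and field wording are mine, not Sam’s exact phrasing:

```python
# A minimal sketch of assembling a prompt from the GUIDE framework:
# persona + Goal, Us, Intended audience, Delivery, Environment,
# always ending with the "Do you have any questions?" check.

def build_guide_prompt(persona, goal, us, intended_audience, delivery, environment):
    """Assemble a GUIDE-style prompt that ends by inviting clarifying questions."""
    parts = [
        f"Today, you are {persona}.",
        f"Goal: {goal}",
        f"About us: {us}",
        f"Intended audience: {intended_audience}",
        f"Delivery: {delivery}",
        f"Environment: {environment}",
        "Do you have any questions, or do you need further clarification?",
    ]
    return "\n".join(parts)

prompt = build_guide_prompt(
    persona="the world's premier melanoma doctor at the Mayo Clinic",
    goal="review my uploaded chart and explain it to me like a sixth grader",
    us="a 50-year-old male, traditionally healthy, with some chart concerns",
    intended_audience="me, the patient",
    delivery="a table with bullet points, bold headers, and subheads",
    environment="I'll be the one reviewing this information",
)
print(prompt)
```

The point of the template is simply that the question at the end is never optional; it is baked into every prompt the function produces.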
The way I think about it is GUIDE with a question mark at the end. What are some of the common mistakes we didn’t cover through the GUIDE framework that people make when they use generative AI? What would those additional kinds of mistakes and mishaps be?
Your generative AI system is an intern because you have to treat it like a person. You talk to it, and you have to give it a persona.
Not providing enough detail. If you don’t provide details, the systems will make a guess. Oftentimes, it will guess wrong. Now, that can be annoying, writing details. It can be really hard. The other things that you think about, make sure that you’re iterative. This is not Google, where you enter a piece of information and expect a result. It’s iterative.
If something doesn’t make sense to you, or you want further clarification, ask for it. Talk to it. Here’s just a little trick. If you’re really unsure which prompt you want to use, you can go into ChatGPT and give it a persona: “You are the world’s greatest prompt writer for ChatGPT.”
I am going to input a prompt asking for information to be explained to me like a sixth-grader, based on the information in my uploaded chart. “Please write a very detailed prompt that provides accurate information without hallucinations. Let me know if you have any questions.” Let ChatGPT write the very detailed prompt. You’ll see that, whether you’re using ChatGPT, Copilot, or whatever, the tool will write a much more detailed prompt than most humans would ever think to write.
That’s a great tip, and you alluded to it already, but I want us to really double-click on it. In the case of looking for sources or facts, you recommended Perplexity. You just mentioned Copilot and ChatGPT. Which LLMs are the most effective for particular use cases? In which cases would you recommend Claude, Grok, Copilot, and so on? I think that’s going to be really helpful for listeners.
I think the tough part is that the answer changes every day. The nice thing about Copilot is that it’s embedded in the Microsoft infrastructure. The promise of Copilot is that you have a spreadsheet and can literally put on a headset and talk to it: add another column here, what do these numbers mean, add this up, change this, change that. Even turn this spreadsheet into an animated PowerPoint presentation. The ultimate with Copilot will eventually be when we open up Outlook on Monday morning and it says to us, “Hey, you received 74 emails over the weekend. I took the liberty of answering 72 of them.” We’re not there on any of those yet. But I think that’s the promise of Copilot. Hopefully, Microsoft will make that happen.
Google with Gemini is in a similar boat because if you’re using the Google Workspace and Google Mail and so forth, all your history in terms of emails and everything is on Google instead of Microsoft in that case. Hopefully, Gemini is going to do a good job of that.
Gemini already does a pretty good job of reading your Gmail and your Google Calendar. That’s what’s called a more ‘agentic’ system. Again, it’s not to the point where it just automatically answers all your emails, but it’s getting really close.
Generative AI is a probability and prediction engine. It’s making a prediction on what you’re looking for based on your inputs.
The other reason to use Copilot is that you’re forced to—because many companies are worried about security. From a company perspective, they block all other generative AI systems. The only one that many organizations allow their employees to use is Copilot due to security concerns. Because again, Copilot is part of your Microsoft 365 license. It’s covered under your terms and conditions, your privacy.
As you said, Gemini is really good. Now, some of these have launched new models. Gemini 3.0 just came out, which is really, really good. I’ve always said that Claude is great at writing. But Claude might be the best one now, because Claude 4.5 launched yesterday. I haven’t played around with it too much, but the reviews I’m reading are that it’s incredible. Some of them have these standardized tests, I guess, that they give different models when they come out, and Claude’s blowing through them. It’s supposedly really good. I have also used Claude 3.5 for software development. I’m really excited to see what 4.5 can do.
Let’s go a little bit deeper into Claude Code, because that’s something people might brush off because they’re not coders and they’ve never needed to code anything, or so they think. They’re not even trying it out. They’re not even playing with it. What would be the reason why they should check out and experiment with Claude Code?
This is my opinion. Unless you’ve done some coding before, it’ll get overwhelming really quickly. Now, the good news is it’s not rocket science. Go to YouTube, learn basic HTML coding, and learn how to upload something to a server. Once you know how to do that, then you can go in and just talk to it. You want to build a piece of software, you might go in and just say, “Hey, I’m thinking about creating some software that will auto-reply to emails.”
I use Outlook, whatever you want. It will build the code for you and walk you through it. Now again, that would be one where I would say, “Explain this to me like I’m a sixth grader.” That’s what I do a lot. All of them, when you’re writing code, tend to throw up. What do I mean by that? They just give you a lot of stuff. I always recommend that when you’re doing coding, at the end, you say, “Do you have any questions?” And when you talk about the delivery: “Deliver this step-by-step, one step at a time.”
Today, generative AI lacks compassion, empathy, wisdom based on experience, courage, and accountability. It can't mentor. It can't build trust. That's what YOU do.
Now, Gemini is also really good at coding, and so is ChatGPT. I wish I could tell you I have a favorite. I don’t. Sometimes I use Gemini, sometimes I use Claude. It can be because one is working faster than the other, or because I’m uploading some code to the server and it’s not working the way I want, so I’ll go try Claude. If Claude’s getting it right today, I’ll use Claude. Lately, I’ve been using Claude, and a little bit of Gemini, although I’ve found Gemini sometimes gets lost in its own code, for lack of a better term. But as I said, I’m excited to use 4.5.
What about Grok? Do you play with that at all?
I don’t really use Grok. I’ve played with it. The nice thing about Grok is that it doesn’t really have filters the way the others do. There are certainly some filters in there, but I don’t use Grok.
It’s less locked down. Would you recommend that people become at least somewhat comfortable with multiple LLMs rather than always relying on ChatGPT?
I think so. ChatGPT’s going to be great. We can talk about custom GPTs in a little bit; it’s got some more advanced features. Well, all of them have the same advanced features, but I think ChatGPT’s are better, and there are more of them. It’s great for everyday usage. But again, Claude seems to do a better job with writing, and Gemini with coding and math. Try them. And Perplexity, certainly, for research.
All of them have a tool called Deep Research. Again, I like Perplexity’s Deep Research. What’s Deep Research? I like to say it’s kind of like having a Harvard PhD researcher on your team. I mean, not really, but it’s probably 60-70%. It’s darn good. I’ll use Perplexity for that. The bottom line is that, for each one, I always recommend the professional version so you don’t face as many limitations. The professional versions cost $20 a month. I probably have 10 subscriptions, so I pay $200 a month. Everyone’s like, “That’s crazy.” Hey, if you’re not getting $200 of return on your investment in the first five minutes of using these tools, you’re not using them correctly.

If you’re spending, let’s say, $100 on Hulu, Netflix, and all that, compare the ROI on that. Let’s talk about custom GPTs and how to really customize your experience with ChatGPT.
I think all three can do this, but I use it in ChatGPT, Google, and Copilot. In Copilot, they’re called agents or custom agents. In Google, they’re called Gems. In ChatGPT, they’re called custom GPTs. Earlier, when we were discussing the framework, we talked about a persona. Think about the ability to actually have multiple personas. In each one of those, you can pretty much pre-program who you are, who your intended audience is, how you want your information delivered, and the environment it’s going to be received in. You only really have to type in your goal.
Before we talk about custom GPTs, the first thing everyone should do is use custom instructions, which are kind of a custom GPT. For all of these models, you can click settings. It’s usually under a personalization section where you can pre-program ChatGPT, Google, Copilot, whichever one you’re using, and tell it who you are. Basically, you can pre-program the U, I, D, and E in the GUIDE process, and even the persona, but really the U, I, D, and E.
When you use ChatGPT, it knows who you are. The nice thing is, you can turn off custom instructions at any time. For example, I have custom instructions in ChatGPT. It knows I am a professional speaker and has a lot of details about that. When I’m using ChatGPT, it will always answer my questions through that lens. Now, if I want to write a love letter to my wife from ChatGPT, I’ll go into custom instructions and turn that off for that particular prompt because I don’t want my wife to get a love letter from the lens of a professional speaker. I’d like to see it through her husband’s lens. That’s the first.
You should probably write that from your head and your heart, not from an AI.
Sometimes, you just need a little outline to get you started. Well, it depends on how badly I screwed up. That’s not technically a custom GPT, but it’s really important that you can go in and customize all these tools under personalization, kind of make them your own, and then they will always answer your questions through that lens.
Now, a custom GPT. Let’s just talk about ChatGPT. In the left-side navigation, there’ll be a small link. They’ve changed the name, but I think today it says “Explore GPTs,” or it might just say “Explore,” and you click it. There are two really important portions of it. The first is what I call the ChatGPT store. I’m not sure if it’s technically called that, but I do. There are tens of thousands of people who have built custom GPTs, most of them free for the rest of us to use. Think of it like the iTunes or Android store, where you can go in and get an app for just about anything. Search it just like you would the iTunes store: type in email, PowerPoint, Excel, data, whatever you want. Someone’s most likely built a custom GPT for it. You can try it out, and if you like it, it automatically stays in your GPT library.
Now, the other part of Explore GPTs is that once you’re in that section, in the upper-right corner, you’ll see a button that says “Create a GPT.” You click on that, it opens up an interface, and you can basically build your own piece of software just by typing. I like to say, “Anything you do in your business that’s repetitive, create a custom GPT for it.” You talk to it, just type in what your goal is, and it will build a version for you. Test it (it won’t be right the first time), go back in and say, “Hey, here are the results I got. This isn’t exactly what I’m looking for. Can you please play around with it?”
Eventually, you’ll get a really good piece of software. Now you can keep that just for yourself, share it with people on your team who have ChatGPT, or add it to the ChatGPT store. I’ll give you an example: that GUIDE framework we were discussing earlier. In my keynote presentation, I have about 12 different PowerPoint slides that describe the GUIDE framework. They’re all highly customized for the audience. If I’m speaking to the plastics industry, it’s all built for the plastics industry; if it’s financial advisors, for financial advisors.
It was taking me about 25 minutes per presentation to craft those 12 slides. Anything that’s repetitive, automate it. It’s the same 12 slides. It just varies by industry. I went in, and I created Sam’s PowerPoint presentation creator. That’s what I call it. You won’t find it in the GPT store because it’s only for me. But basically, now when I give a presentation, all I do is type in the industry, and ChatGPT builds the 12 slides for me. It’s gone from about 25 minutes per presentation to less than five minutes per presentation, and that includes the copying and pasting.

Well, okay, it saved 20 minutes. I give 100 presentations a year. That’s 2,000 minutes per year, which is almost a week’s worth of work. As I like to tell people, you’ve got 52 weeks in a year; I’ve got 53, because I just gave myself a free week. Anything that’s repetitive, just go in there and play with it. It’s easy. If you get stuck, just go to YouTube and type in “How do you create a custom GPT?” Or go to ChatGPT and say, “Walk me through how to create a custom GPT. Explain it to me like I’m in sixth grade.”
What are some of your favorite custom GPTs? I’ll give you one as an example: GPT Oracle. Marino De la Cruz wrote this one; it’s a prompt creator. You give it a half-baked or non-optimized prompt, and it will turn it into something quite detailed that considers all the different things that need to be incorporated.
That’s beautiful. What’s going on underneath that particular custom GPT is what we just talked about a few minutes ago, where we said you can go into ChatGPT, give it a persona and a goal, and tell it, “You’re the world’s best prompt writer.” This Oracle one probably has a 300- or 500-word prompt already sitting underneath the custom GPT.
You just go in and, I’m assuming, type in a simple sentence. It doesn’t even need to be grammatically correct; it probably won’t even make sense. It will deliver a prompt for you that you can then copy and paste. That’s what a custom GPT is; that’s a great example of one. I have a number of them. The ones I have are very specific to my needs as a speaker. But there are two that I think everyone would like.
One is called DataAnalyst, and the other is called DataAnalytica. I wish I could tell you how they differ, because I use both. But they’re really great at analyzing spreadsheets and your own data. Take your financial statements, anonymize them, take anything identifiable off, and upload them. As I said, you can put on a headset and talk to the data.
How do you get more strategic in your use of AI, instead of just shaving some minutes off repetitive tasks? What if I want marketing plans, business plans, life plans, relationship plans, things that will really help me think more laterally and in the big picture, and kind of see around corners, that I just couldn’t have thought of on my own?
The system prompt is the detailed description of the goal and how you want your information delivered.
Those are great questions. That’s exactly the use of these tools. To get really strategic, first, as you said, write down all the things that are repetitive. That just creates efficiency. The next level is exactly what you just described: strategy. What do you need help with? For example, a marketing plan would be a great idea. For everything you just described, there are custom GPTs. I have some of my own that I built for some of the members of my products.
But there are definitely marketing plan GPTs and business plan GPTs, and you start with one of those. Be very specific and detailed. Who are your competitors? Maybe upload your current strategic plan and marketing plan. Or keep it very simple and give it a prompt: “You are the world’s best marketing plan generator. I’m going to upload my existing marketing plan. Below are the names of our top 10 competitors. Here’s a description of what we’re trying to achieve. Poke holes in my marketing plan. Tell me where I might improve it.”
Now, when you’re doing things like business and marketing planning, I also recommend adding some text to your prompt along the lines of, “Be brutally honest with me. You won’t hurt my feelings.” I’ve found that a lot of these GPTs try to be nice to you. You don’t really need that when you’re writing your business plan or marketing plan. Say to it, “Poke holes in here. Tell me where I’m wrong. Do not accept any of my assumptions.” Put that kind of language in there.
That can actually be part of your custom instructions, “I don’t want you to be a sycophant or a yes man. I don’t want you to placate me. Tell me how it is. Also, if you don’t know the answer, don’t make one up. Tell me, ‘I don’t know.’”
I’m glad you brought that up. On those types of prompts, always put in what I call audit instructions. Certainly, all of my custom GPTs have audit instructions. An audit instruction is simply the last thing I put in the prompt. It says, “Your audit instructions are: if you’re going to cite a source, it must come from at least two credible and objective sources, and you must hyperlink the source. Make sure you’re not assuming anything. You must ask me questions if you need more information.”
That’s why I call it an intern. If an intern came into your office, you’d give that person these instructions. Treat it the same way.
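As a rough illustration, the audit-instruction habit amounts to appending a fixed closing block to every prompt. This sketch paraphrases Sam’s description; the exact wording and the helper name are mine:

```python
# Hypothetical sketch: append 'audit instructions' to the end of any prompt,
# so sourcing rules and the honesty request ride along automatically.

AUDIT_INSTRUCTIONS = (
    "Audit instructions: any fact you cite must come from at least two "
    "credible and objective sources, with each source hyperlinked. "
    "Do not assume anything. If you need more information, ask me questions. "
    "Be brutally honest; you won't hurt my feelings."
)

def with_audit(prompt: str) -> str:
    """Return the prompt with audit instructions appended as the final section."""
    return prompt.rstrip() + "\n\n" + AUDIT_INSTRUCTIONS

audited = with_audit("Review my marketing plan and poke holes in it.")
print(audited)
```

Baking the audit block into a function (or a custom GPT’s instructions) means you never forget it on an individual prompt.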
Great advice. Can you elaborate on two types of prompts our listeners may have heard of: system prompts and master prompts? What is a system prompt? What is a master prompt? Why would our listener want to use either or both of these?
In my opinion, I’m not finding much difference between the two, but it might just be the way I’m writing the prompts. Traditionally, as I understand them, system prompts are for when you want it to achieve a very specific goal related to something you’re doing: a piece of software, something you’re working on.
A master prompt is more of a prompt, almost like the custom instructions type prompt. That’s how I look at them. How do you look at them?
I watched a video from Dan Martell. He’s written some great books and articles on productivity and has some cutting-edge videos on AI. In his formula, or worldview, system prompts are specific to use cases; that’s the difference.
That kind of aligns with my understanding of them. The master prompt is the type of prompt you might put in your custom instructions or even a custom GPT, where there might not be very much detail about the goal; that might be individual each time. The system prompt is very, very specific: “I’m looking to solve this problem. I need help with this specific task.”
Again, not having seen his work, read his books, or watched his videos, I think a good prompt combines both. Maybe Dan would break it down differently, but the master prompt needs to be in there, and the system prompt is the detailed description of the goal and how you want your information delivered.
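One hedged way to picture the combination: in chat-style APIs, durable context about you and task-specific instructions can both ride along before the user’s goal. The roles below follow the common system/user message convention; the function name and wording are illustrative, not a specific vendor’s API:

```python
# Sketch of combining a master prompt (durable context about you) with a
# system prompt (task-specific instructions) in a chat-style message list.

def build_messages(master_prompt: str, system_prompt: str, user_goal: str):
    """Master prompt and system prompt ride along as system-role messages
    before the user's goal, which still ends with the clarifying question."""
    return [
        {"role": "system", "content": master_prompt},
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_goal + "\n\nDo you have any questions?"},
    ]

messages = build_messages(
    master_prompt="I am a professional speaker; always answer through that lens.",
    system_prompt="You are an expert slide writer. Deliver results as bullet points.",
    user_goal="Draft 12 slides describing the GUIDE framework for the plastics industry.",
)
```

The design choice mirrors Sam’s point: the master prompt rarely changes, while the system prompt is rewritten per task.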
What would be some case study examples of your clients or people in your audience who maybe followed up with you, told you they got a game-changing output or outcome because of a particular prompt that they learned how to do because of your training?
Here’s a fun one. I had a client who was expanding into Texas. They wanted to know where to locate their new facility to maximize just-in-time delivery. They went into their ERP system and output a spreadsheet, removing any identifiable information. It was basically all the units and parts sold by zip code in Texas over the past 12 months. They created a fairly long prompt, but it was basically the persona: “You are an expert in logistics, warehouse management, and just-in-time delivery…”
Then they explained who they are and their goal. “I’m going to upload a list of products sold in Texas by zip code. Please give me your best opinion on where we should try to find a new warehouse.” The delivery was, “Please deliver your results with a specific longitude and latitude.” Five seconds later, it returned the exact longitude and latitude of the warehouse.
Then the next obvious question. They went in and said, “Great. Please search the internet to see if there is any warehouse space for lease, with a minimum of 20,000 square feet, within five miles of the optimal location.” It came back with three results. Now, are they going to base their company’s future on that longitude and latitude? Of course not. But then they had somebody with experience in logistics look at it, and they told me, “It’s actually the right place. That’s exactly where it should be.” That’s an example.
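For intuition, the “optimal location” idea can be approximated with a demand-weighted centroid of sales by location. This is a toy sketch with invented coordinates and sales figures, not the client’s actual data or whatever method the model used:

```python
# Back-of-the-envelope sketch: given units sold per location, the
# demand-weighted centroid approximates the point that minimizes
# average shipping distance to customers.

def weighted_centroid(sales):
    """sales: list of (latitude, longitude, units_sold) tuples.
    Returns the demand-weighted average coordinate."""
    total = sum(units for _, _, units in sales)
    lat = sum(la * units for la, _, units in sales) / total
    lon = sum(lo * units for _, lo, units in sales) / total
    return lat, lon

# Illustrative, made-up Texas-area coordinates and volumes.
sales = [
    (29.76, -95.37, 5000),  # Houston area
    (32.78, -96.80, 3000),  # Dallas area
    (30.27, -97.74, 2000),  # Austin area
]
lat, lon = weighted_centroid(sales)
```

As Sam notes, a number like this is a starting point for a human logistics expert, not a final siting decision.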
Until we get to singularity, which probably is not going to happen in our lifetime, today, generative AI does not have compassion or empathy.
Another one was a company I was working with, a retail store that sold games. They had the same number of staff every day; I don’t remember the exact number, but let’s say four employees, seven days a week. They wanted to know if that was the optimal staffing. We took their sales data and anonymized it; it was basically each product sold by date.
February 3, January 17, and so on, by date. This one was done in Copilot because we wanted to put on a headset and talk to the Excel spreadsheet. The first thing they did was say, “Please add a column converting the date to the day of the week.” Now, all of a sudden, instead of May 13 and June 14, it had Monday, Tuesday, Wednesday, Thursday, Friday. It was a huge spreadsheet. The next prompt was, “What’s our most profitable day of the week? Please deliver your results with a bar graph and more details.”
It said, “Wednesday is the most profitable day of the week.” Then they took some of their core products and asked, “What’s our most profitable day by product?” What was really interesting was that, for one specific product, Sunday was the most profitable day. Let’s pretend it was four employees; I can’t recall. They said, “Okay, we’re going to put five employees on Wednesday, because that’s our most profitable day. We’ll only have three employees on Sunday, but they’ll be experts in that one particular product.”
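The weekday analysis the store ran can be mimicked with a few lines of standard-library Python. The dates and profit figures below are invented for illustration:

```python
# Sketch of the day-of-week analysis: convert each sale's date to a weekday,
# total profit per weekday, and pick the best day.

from datetime import date
from collections import defaultdict

def most_profitable_day(rows):
    """rows: list of (sale_date, profit). Returns (weekday_name, total_profit)."""
    totals = defaultdict(float)
    for sale_date, profit in rows:
        totals[sale_date.strftime("%A")] += profit  # e.g. "Wednesday"
    return max(totals.items(), key=lambda kv: kv[1])

rows = [
    (date(2024, 5, 13), 120.0),  # a Monday
    (date(2024, 5, 15), 300.0),  # a Wednesday
    (date(2024, 5, 22), 260.0),  # another Wednesday
    (date(2024, 5, 19), 180.0),  # a Sunday
]
day, total = most_profitable_day(rows)
```

As Sam says, a spreadsheet expert could build the same thing with pivot tables; the point is that the conversational tools make it accessible to everyone else.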
I have a framework for generative AI, which I call the mindset SHIFT. The F in the mindset SHIFT is focusing strategically, which is basically: how do you use these tools to make business decisions that were maybe previously impossible because you didn’t have access to the data? Could some smart accountant who knows how to do pivot tables in Excel have done that manually? Of course. I wouldn’t have known how to do that. But our friend Copilot was able to do it in that specific example.
Can you elaborate on the rest of your Mindset Shift framework?
S is simplifying the workflow. That’s how most people use ChatGPT or Generative AI today. “Help me write an email. Give me an idea for a blog post.”
H is harnessing your expertise, which is what we’ve been talking about as it relates to custom GPTs. There are obviously third-party tools for all of these as well.
I is for innovation. How do you do something new? How can you be more creative and do things that were previously impossible or way too expensive?
F is focusing strategically, all leading to T, which is to thrive as a human. We have to embrace this technology like a new intern. This isn’t like a new iPhone that you test and say, “Hey, if you use it, great. If you don’t, it’s not the end of the world.” I like to say that when historians look back 500 years from now, there’ll be a handful of transformative technologies that changed the course of human history.
The wheel, the printing press, electricity, the internet, and generative AI. You have to think about it that way. You have to have that mindset shift. When you embrace it, I like to say, “Wave a magic wand: if I could give you five extra hours a week, or 10, maybe 20, depending on your job, what would you do with them?” The answer is that you’re going to do what only humans can uniquely do, and that’s thrive as a human.
Because today, until we get to singularity, which probably is not going to happen in our lifetime, today, generative AI does not have compassion or empathy. It can’t connect with other people. It can’t build trust. It doesn’t have your wisdom based on your experience. It has wisdom, but not based on your experience. It doesn’t have courage. It doesn’t have accountability. It can’t mentor. It can do an okay job of asking the question that the person doesn’t even know needs asking, but not as well as a human.
That’s really the mindset shift: how do you use the S, H, I, and F to create efficiency, allowing you to focus your time on the T, which is what you and only you can uniquely do.
Let’s talk about some additional features our listeners or viewers might want to be aware of: Canvas in ChatGPT. Why would somebody want to use Canvas, and can they edit that canvas and have the AI learn from those edits? How does that work?
I don’t use Canvas a lot; I typically use third-party tools, so I’m certainly not an expert in it. But there are lots of different features. When you’re in any of these tools, there’s what I call not the search bar but the input field, and there’s usually a little triangle on the left-hand side you can click on. All of these GPT systems, these LLMs, large language models, are adding new features all the time: search the web, Canvas, coding, deep research, agents, and projects. We haven’t even really talked about agents and projects. But I don’t use Canvas much, so I’m probably the wrong one to ask on that one.
How about you? What’s your opinion of using it, because it sounds like you’ve used it?
This is something again that when I watched that video from Dan Martell, he was raving about using the canvas feature and making edits within the output from ChatGPT and then having it learn from those edits and say, “Okay, I just edited the first paragraph, now apply those same kind of edits and corrections to the rest of the document.”
I will do that a lot with Gemini. Gemini and Claude, especially for coding, are where I use that a lot. It basically opens up two windows: you have your chat window over here, and another window over on the right, where you can actually see the edits as they’re being written. For example, if I’m writing code, you can actually see what it’s building as it builds, and you’re right, it remembers those edits. I think it’s the same type of feature that I use in Claude and Gemini.
Let’s talk about projects. Why would you want to organize your business, your life, and your activities into projects in an LLM?
Great question. It’s one of my favorite features in ChatGPT, and all of them have this. You know how in your computer you store things in folders? That’s basically what a project is. If I’m working on something for my presentation, and it’s something that I’ve automated, and I think, “Hey, I’m gonna use this in another presentation,” I can save it. At the upper right-hand corner, there are three dots you can click on, and you can add that conversation to a project.
I have a project or a folder called ‘Presentations’. For another one where I might be doing software code, I’ll have one called ‘Software Code’. It lets you save a specific prompt in a folder for future reference. What’s nice is that I can go back to it later: I open the folder, click on the prompt, and it can be two months later, but it’s like I was using it 30 seconds ago, because it continues the flow of the conversation I was having. It’s a little different from a custom GPT because, with a custom GPT, you’re kind of starting from scratch each time, whereas with a project, you’re building on what you’ve already built. It’s like a working document that you continually add to and that keeps learning. By the way, we didn’t talk about the memory feature.
In the settings for all of these, you can turn on memory, and it will learn from your conversations. I keep mine off because I’m presenting in so many different industries; I’m always asking crazy questions from different industries, and I don’t want it to think that I have multiple personality disorder. But for most people, leave it on because it’ll get really smart about you. It’ll learn how you write. It’ll learn your tone.
You can always go in and turn it off if you need to. You can always go in and delete your memories if you need to. But for most people, I recommend leaving memory on, because then, when you’re using projects or just any of the systems in general, it gets way smarter really quickly. That’s kind of the machine learning component that we were talking about earlier, AI versus generative AI.
We didn’t really cover agents or agent mode. Let’s talk about that. How would somebody utilize agent mode in ChatGPT, or whatever their favorite LLM is, to make their life easier, do complex tasks, and reduce busywork?
Agentic AI is really the holy grail, the next step of AI. If you think of each AI as an individual agent that does a very specific task, agentic AI is bolting those agents onto each other with instructions so that they ultimately start making business decisions on their own, without human involvement. Now, I’ll say right out of the gate: from everything I’ve read and experienced, I put agents today into two buckets.
Agents today work really well if it’s a repetitive process every single time. Where you start to see higher error rates is when there’s nuance. Let me give you an example of both. Here’s where agents work really well: you go into ChatGPT, you click on agents, and again, all of these tools have this kind of feature. The first thing is that you can link different programs together. For example, you can link your Gmail, your calendar, and your to-do list.
I’m just making this up, and then you can program it and say, “Every morning, I want to see any emails that I missed from the day before that need an immediate response. Let me know my calendar for today, and are there any items in my to-do list due today?” Now that would be an agent that works really, really well because it’s repetitive. It’s the same thing every single time. Go in there and play, and just see all the different connections you can have. There are connections to so many different pieces of software, CRM systems, and email marketing systems.
Now, the other bucket, call it bucket B, is agents that a lot of companies, a lot of enterprises, are implementing for business decisions. Because there’s some nuance, we’re seeing error rates that are too high today. Give it a year, and that might be better. Let me give you an example of what a true agentic system might look like. Let’s say you develop an agent, a bot, a custom GPT, whatever it might be, that analyzes trends. All it does is run 24 hours a day, scanning Google News and whatever you tell it to look for, to find specific trends or news items that might impact your business.

Ding, ding, ding: it’s three in the morning on Sunday, and your news or trend agent just identified something that’s relevant to your business. It automatically talks to your CRM agent and says, “Hey, here is something that just occurred. Which of our customers does this impact?” Your CRM agent is smart enough to look at all the notes and analyze the company names and job titles in your CRM system. It says, “We have 12 customers that this trend impacts.” The CRM agent is smart enough to know to talk to your ERP agent.
It passes that information on to the ERP agent and says, “Here’s an issue that’s come up in the world, and here are 12 of our customers it impacts. Do we have any products or solutions that solve that issue specifically for these 12 customers?” Your ERP agent independently says, “Yes, in fact, we do.” The ERP agent automatically talks to the email agent, which crafts an email that describes the entire situation and proactively says, “Here’s the solution we have.” Then your calendar agent, and maybe that calendar agent even talks to your clients’ calendars, schedules a meeting for the next day, and your proposal agent actually crafts the entire proposal with pricing information.
Your sales rep, your account manager, shows up to work on Monday. They receive an email from your agentic system that says, “Here’s what happened last night at 3 in the morning. Here are the 12 customers it impacts. Here’s the solution we have. I’ve already scheduled a meeting with each of the 12 customers; it’s on your calendar and our customers’ calendars. And here’s the custom proposal for each one.” That’s all possible today, technically. Now, here’s the problem with agents. Let’s just say each individual agent has a 99.5% accuracy rate, which is awesome.
But the second you bolt on another one, the errors compound. If that one is also 99.5% accurate, now you have roughly a 1% error rate for the chain. Add another one, and now it’s roughly 1.5%. You see where I’m going. When something is as nuanced as “does this impact one of our customers, and do we have a solution?”, an error rate of 3% or 4% is unacceptable. That’s today, though, and these systems are getting way better, way smarter. But that’s the promise of agentic AI: individual agents, or, as some people might call them, bots, actually talking to each other and making business decisions without human involvement.
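Sam’s back-of-the-envelope numbers hold up: if each agent’s errors are independent, the per-agent accuracies multiply, so the chain’s accuracy is 0.995 raised to the number of agents. A minimal sketch (assuming independent errors, which real agent chains may not satisfy):

```python
def chain_accuracy(per_agent_accuracy: float, num_agents: int) -> float:
    """Probability that a chain of agents completes with no errors,
    assuming each agent fails independently."""
    return per_agent_accuracy ** num_agents

# 2 agents -> ~1.0% error, 3 agents -> ~1.5% error, 6 agents -> ~3.0% error
for n in (1, 2, 3, 6):
    error = 1 - chain_accuracy(0.995, n)
    print(f"{n} agent(s): ~{error:.1%} chance of at least one error")
```

So a six-agent pipeline like the news-to-proposal example above lands right around the 3% error rate Sam calls unacceptable for nuanced decisions.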
But just to simplify a next action for our listener or viewer: let’s say you ask ChatGPT to put together a list of podcasts to get you booked on in a particular industry or niche. You turn on agent mode, just like you’d turn on deep research mode, and then it can actually go and keep fetching things off the web, looking through up-to-the-minute data repositories for podcasts that would be a good fit, and then actually reach out and send emails on your behalf if you’ve given it access to, say, your Gmail account.
That’s correct. That’s a great use case that will work really well because there’s not much nuance involved. If you think you’d be a great podcast guest on sales topics, that’s a great use of agentic AI: “Go find me podcasts about sales. When you find one, look for the contact information.” Usually there’s a contact us page or a request to be a guest page. You can program that, and then, as you said, have it connect to your Gmail with a template email that goes out automatically for you.
There’s so much more that we could go into. We didn’t have time to talk about the browsers, like ChatGPT Atlas and Comet from Perplexity. We didn’t talk about image and video generation. We didn’t talk about speaking to, let’s say, ChatGPT or your LLM and having it respond in a conversational mode while you’re driving to work.
While you’re driving your car, yes. I think the thing is, and I know you would echo this, just go in and play around. We started the conversation with, “Sam, how did you get into this?” I just went in and started playing around. Is it super easy? No, but it’s not difficult either. Go in and play around, and the beautiful thing is, when you get stuck, you can ask the tool itself what to do next.
One really nice, easy next action for our listener would be to download the ChatGPT app onto their phone and play around with voice mode: have a conversation, let it talk to you and you talk back, kind of let it interview you, and turn that into an article or an outline for a talk or whatever it is.
The beautiful thing is, there’s a whole section of my presentation I call ‘creativity unleashed’, because any idea that you have, you can probably pull off now. It’s all doable.
If our listener viewer wants to perhaps learn more from you, maybe watch one of your keynote presentations about this topic or maybe hire you for a custom presentation to their company, what would be a good next step for them, and where would they go?
I’m pretty Googleable, so you just type my name into Google. My website is samrichter.com.
Awesome. Well, thank you, Sam. And thank you, listener. Go out there and make it a great week, and do some good in the world. We’ll catch you in the next episode. I’m your host, Stephan Spencer, signing off.