Topics: Sales & Marketing Technology | Data Security | Artificial Intelligence

The AI Best Practice Guidelines Your Company Needs

By Marcus Sheridan

Sep 1, 2023


As the AI wave keeps building, I’m talking to business leaders every day and asking the same question: Do you have an AI policy at your company?

Overwhelmingly, the answer is no. 

And, as you might guess, the question I get in response is, “Marcus, SHOULD we have one?”

I think most businesses don’t need a public-facing AI policy. After all, AI is evolving so quickly that if you plant your flag and state a firm policy, you run the risk of having to backtrack. 

What they need are guidelines that encourage their workers to explore new tools responsibly in a way that keeps company values front and center. 

Internal guidelines vs. public-facing policy

Let’s start by defining some terms. I think of a policy as a public-facing document that explains your stance on something. In this case, think of a disclaimer on your website that says “We do not use AI to generate our content.”

Some companies are doing this already.

If you put this disclaimer on your website, will visitors be more trusting of your content? More attuned to your point of view? Some will, but I doubt it will make too much of a difference.


People want quality. They want authenticity. They don’t really care how they get it. We have no issue with computer-generated effects in our movies and our music. We have no problem with Photoshop. If a human writes a social post and a machine corrects for a missing word, is the output human-generated? If a human prompts a program, then changes some of the output, is that output AI-generated? Where’s the line — and does it matter?

The goal of your content is to make it easier for people to find you, trust you, and buy from you. AI can help you do this. And if you keep your mission in mind, the tools you use don’t really matter. 

Your company’s AI guidelines

By contrast, I think your company should have internal guidelines around the use of AI so all of your employees are on the same page about what’s expected. The difference is that this is not a public-facing doc that paints everything in black and white.


These should balance general best practices with a few hard-and-fast, non-negotiable rules. 

Here’s what I’d suggest goes in it:

If you use generative AI, these things must be true

  • Everything is edited by a human before it’s published. This is an absolute must for me, even with the best tools. You need real human eyeballs on anything that’s going out to make sure it sounds like you and fits with your goals. You want to stand behind anything you say, but it’s hard to do that if it was composed by an LLM.
  • We always fact-check every claim we make. AI tools “hallucinate.” This means they can make up facts — but they can also make up the sources of those facts. Scientists are not sure why. ChatGPT can cite research that never happened, reference books that don’t exist, and court cases that are totally phony. It’s up to you and your team to verify everything. This means rigorous fact-checking and a skeptic’s mindset.
  • We run a plagiarism checker to make sure our AI didn’t steal anyone’s work. This connects to the first two. Some tools have plagiarism checkers built in. Many do not. There are independent tools like DupliChecker and Plagiarism Detector where you can paste in text and get a percentage score for how much is plagiarized (a rough sketch of that kind of similarity check follows this list). Your standards for plagiarized work should be extremely stringent. You don’t want a machine to lift someone else’s work and leave you passing it off as your own.
  • We pay attention to potential biases in anything AI-generated. This one is harder to check for, I know, but it’s good to have in the back of your mind. Remember, it’s great to share the opinions and insights of your team. It’s bad to inadvertently put out biased content you can’t back up.
  • We always keep our goal (and our audience) in mind. The goal of any content should be to educate buyers and build trust with your audience. AI can help you do that faster, but it’s just a tool. If it’s misused, it will hurt you more than help you.
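
To make that “percentage score” idea concrete, here is a bare-bones sketch of a similarity check using Python’s built-in difflib module. It is an illustration only, not a substitute for a dedicated plagiarism tool (those compare against a huge index of sources, not one document you supply yourself), and the sample text and the 25% review threshold are made-up values for the example.

```python
# Bare-bones similarity check: compare an AI-assisted draft against one
# known source and report a percentage. Illustrative only -- dedicated
# plagiarism tools check against a large index of sources.
from difflib import SequenceMatcher


def similarity_percent(draft: str, source: str) -> float:
    """Return a 0-100 score for how similar two pieces of text are."""
    ratio = SequenceMatcher(None, draft.lower(), source.lower()).ratio()
    return round(ratio * 100, 1)


if __name__ == "__main__":
    draft = "Great content helps buyers find you, trust you, and buy from you."
    known_source = "The goal of content is to help buyers find, trust, and buy from you."

    score = similarity_percent(draft, known_source)
    print(f"Similarity: {score}%")

    # Arbitrary example threshold: anything above it gets a human review.
    if score > 25:
        print("Flag this draft for manual review before publishing.")
```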

For any AI that you use, keep these best practices in mind:

  • Evaluate any tool before you use it. New tools are emerging at an overwhelming rate. Not all are trustworthy. Not all will be around in six months. You should test and evaluate any tool before using it for work. If you’re in doubt about it for any reason, don’t use it. There’s likely an alternative. 
  • Protect company and client data at all times. ChatGPT and other tools are not secure places to put sensitive data — something certain businesses have found out the hard way. Be extremely careful about what you input, especially when it comes to financial data, employee evaluations, and other such material.
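
If your team does paste material into AI tools, one simple habit is to scrub obvious identifiers before the text ever leaves your systems. Here is a minimal sketch of that idea in Python; the redact_sensitive function and the two patterns (email addresses and long digit strings) are my own illustrative examples, not a feature of ChatGPT or any particular tool, and real data-loss-prevention tooling goes much further.

```python
# Minimal sketch: mask obvious identifiers before text is shared with an
# external AI tool. The patterns below are illustrative examples only.
import re

# Example patterns: email addresses and long runs of digits
# (account numbers, phone numbers, and similar).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "long number": re.compile(r"\b\d{7,}\b"),
}


def redact_sensitive(text: str) -> str:
    """Replace anything matching a known pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text


if __name__ == "__main__":
    note = (
        "Follow up with jane.doe@example.com about invoice 1234567890 "
        "before drafting the renewal email."
    )
    print(redact_sensitive(note))
    # Output: Follow up with [REDACTED EMAIL] about invoice
    # [REDACTED LONG NUMBER] before drafting the renewal email.
```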

To me, this should cover the basics. But once your guidelines are shared with the team, review them once a quarter to make sure they’re still accurate — and then re-share them with the entire company.

Why guidelines are better than a policy

In most cases, policies are absolute. They say “we never do this” or “we always do that.” However, guidelines are prompts that help us bring our best judgment to bear, and in the age of AI, our judgment matters more than ever.

I think most of the black-and-white AI policies coming out today will seem laughable in a few years. This technology will revolutionize our lives in ways we can’t yet understand. 


A better practice is to create a set of general guidelines to be used internally. These are not so hard and fast that they’ll be out of date in a year. They are principles we should remember when we use any technology, AI or otherwise. 

Make sure everyone is clear about these guidelines. Review them at a company all-hands — or in team huddle meetings. 

The world is changing, friends. We ride the wave by holding true to the values that are in our company DNA, and the principles that got us here. 
