How to Use Artificial Intelligence Safely and Fearlessly with Sound AI Ethics


Artificial intelligence is only useful if it is used… and used responsibly. And the anxieties that accompany the excitement for AI tools cause some business leaders to keep them at arm’s length. But have no fear. Through simple data protections, better prompting to mitigate hallucinations and bias, as well as continued prioritization of the human touch, marketers can unlock the full potential of AI for their teams and clients.

By Jared Frank | 8 min read

The emergence of Generative AI and large language models (LLMs) has brought plenty of exuberance, but also real concerns for business leaders. Rest assured, while AI headlines are currently front and center, those concerns don’t necessarily carry greater weight than any other threat to your business… if you continue to protect the integrity and quality of your work with a strong value system, now supported by sound AI ethics.

Business owners must respect but need not fear AI risks, provided thoughtful governing principles are woven into company culture and processes. As we use AI more and more to help make decisions for our companies and develop messaging for our customers, we must watch out for the current pitfalls associated with AI technology. Ignorance is not an excuse for AI mishaps. Any business can now use AI safely, responsibly, and ethically.

The good news is we now know many of the landmines to watch out for. When considering how to deploy AI in your small business, you must be mindful to protect data privacy, identify hallucinations, avoid reinforcing biases, and curtail negative social impact. In all use cases, AI is a tool that enhances, not replaces, the human touch. You, the small business owner, remain in control and accountable for all final decisions. Here are some AI ethics best practices to help make those decisions good ones.

Key Takeaways

By simply changing the default settings in your preferred LLM, marketers greatly reduce risks associated with data privacy.

LLMs hallucinate about 5% of the time, meaning the outputs used to make business decisions must be verified by a real person.

AI models inherit human biases, and those biases are then amplified by algorithms. Users can mitigate this pitfall with more detailed prompts and fact checking processes.

Protect Data Privacy with AI Ethics

For business leaders, data privacy, for both your company and your customers, is one of the most critical aspects of an AI ethics policy. The default settings for most LLMs permit them to store your data, train on the data you provide (including your IP), and, in some cases, apply security practices that leave your data vulnerable to breaches.

The simplest way to defend against unwanted data sharing is to toggle off the default setting that allows it. This takes all of three clicks. If you run a small business, you can simply ask your team to do the same. In larger organizations, it can be harder to scale this accountability to all individual contributors. Larger companies with available budgets can invest in enterprise versions of popular LLMs with stronger privacy guarantees. Enterprise plans for these LLMs do not train their models on your data, and individual employees cannot opt back in.

Anonymizing your prompts is another easy, and inexpensive, way to protect privacy. When using AI tools, “anonymizing your prompts” means removing any personal, sensitive, or identifiable information from the instructions you give the AI. This approach ensures the data you input into the system doesn’t compromise anyone’s privacy, including your customers, employees, or business. For example, instead of entering specific names, addresses, or account numbers, you use general or fictional placeholders.

How to Anonymize Your Prompts

  1. Remove Identifiable Details: Replace specific names, emails, phone numbers, or other personal information with generic terms. For instance, instead of “John Smith’s account balance is $5,000,” use “Customer X’s account balance is $5,000.”
  2. Use General Descriptions: If you’re asking for advice on customer service emails, avoid copying and pasting real conversations. Instead, summarize the context: “A customer was unhappy about a delayed shipment. How can I respond to resolve this issue?”
  3. Leverage Dummy Data: When using AI tools, insert fake data that mirrors the structure of the real information without containing any actual details.
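For teams that route prompts through scripts or internal tools, the redaction steps above can be sketched in a few lines of Python. The patterns and placeholder labels below are illustrative assumptions, not a complete PII scrubber; names, addresses, and other identifiers need broader techniques than simple patterns.

```python
import re

def anonymize_prompt(prompt: str) -> str:
    """Replace common identifiers with generic placeholders before
    sending a prompt to an LLM. Illustrative only -- real PII scrubbing
    also needs to handle names, addresses, and other identifiers that
    simple patterns like these will miss."""
    # Email addresses -> [EMAIL]
    prompt = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", prompt)
    # US-style 10-digit phone numbers -> [PHONE]
    prompt = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", prompt)
    # Long digit runs (8+), e.g. account numbers -> [ACCOUNT]
    prompt = re.sub(r"\b\d{8,}\b", "[ACCOUNT]", prompt)
    return prompt

raw = "John Smith ([email protected], 555-867-5309) disputes account 12345678."
print(anonymize_prompt(raw))
# -> "John Smith ([EMAIL], [PHONE]) disputes account [ACCOUNT]."
```

Note that the name “John Smith” passes through untouched, which is exactly why step one, swapping names for generic labels like “Customer X,” still has to happen before the prompt is assembled.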

Prevent Hallucinations with AI Ethics

The term “hallucination” is Silicon Valley jargon that basically means “the AI is making it up.” And the reality is, the AI is always making it up. AI doesn’t “know” anything. It just makes exceptional mathematical predictions of the correct next word based on its training data. If that data is insufficient, ambiguous, or out of date, then there is a higher likelihood of hallucinations.

LLMs get it wrong about 5% of the time. Stated another way: they only get it right about 95% of the time. Common hallucinations stem from incorrect calculations (most LLMs are bad at math), inaccurate facts, and information pulled from unreliable sources. Many thought leaders predict that AI will not see mass adoption until hallucination rates consistently fall below 1%.

Until that time comes, your best defenses against hallucination are writing better prompts, triangulating outputs from multiple LLMs, and ensuring all final verifications come from a human.

How to Write Hallucination-Resistant Prompts with Three Building Blocks

  1. Context ensures enough background information has been given to the AI. Leave no room for assumptions.
  2. Boundaries restrain the AI’s “thinking”. Treat the AI like you would a toddler.
  3. Reasoning allows you to check the AI’s logic. Ask for sources and make the AI show its work.
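The three building blocks above can be assembled into a reusable prompt template. A minimal Python sketch follows; the exact wording of each block is an illustrative assumption, not a proven recipe.

```python
def build_prompt(question: str, context: str, boundaries: str) -> str:
    """Assemble a hallucination-resistant prompt from the three
    building blocks: context, boundaries, and reasoning."""
    return (
        f"Context: {context}\n"        # background info, no room for assumptions
        f"Boundaries: {boundaries}\n"  # restrain the AI's "thinking"
        "Reasoning: Show your work step by step and cite the "
        "sources behind each claim.\n" # make the logic checkable
        f"Question: {question}"
    )

print(build_prompt(
    question="Draft a pricing email for our spring promotion.",
    context="We sell handmade candles; average order is $40; "
            "the audience is repeat customers.",
    boundaries="Use only the facts above. If something is unknown, "
               "say so instead of guessing.",
))
```

Baking the reasoning block into the template means every prompt asks the AI to show its work by default, which makes the human verification step faster.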

Avoid Bias with AI Ethics

AI models inherit human biases. Logically, if biased humans are providing its training data, that AI will in turn be biased. That bias is then amplified by algorithms. Because AI models increasingly train on AI-generated content, biased outputs feed back into the training data and the bias perpetuates. As a result, when the AI sneezes, we all catch a cold.

Similar to hallucinations, better prompt writing mitigates bias. Deploy the following building blocks to prevent bias:

  • Context tailors AI responses around your audience or circumstances.
  • Role assigns a particular perspective and lens for the AI to look through.
  • Specific Requirements allow you to add nuance to the expectations for your output.
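These bias-mitigating building blocks can also be templated. The sketch below assumes a hypothetical helper and example wording; adapt the role and requirements to your own use case.

```python
def build_bias_aware_prompt(task: str, context: str, role: str,
                            requirements: list[str]) -> str:
    """Combine the three bias-mitigating building blocks -- context,
    role, and specific requirements -- into a single prompt."""
    req_lines = "\n".join(f"- {r}" for r in requirements)
    return (
        f"You are {role}.\n"           # Role: the lens the AI looks through
        f"Context: {context}\n"        # Context: tailor to audience/circumstances
        f"Task: {task}\n"
        f"Requirements:\n{req_lines}"  # Specific Requirements: add nuance
    )

print(build_bias_aware_prompt(
    task="Write a job posting for a retail associate.",
    context="A neighborhood bookstore serving a multilingual community.",
    role="an HR writer focused on inclusive, plain language",
    requirements=[
        "Use gender-neutral wording.",
        "Avoid age- or culture-coded phrases.",
        "List only skills essential to the role.",
    ],
))
```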

Also similar to hallucinations, check outputs for bias by testing them in another LLM. And of course, make certain the final check comes from a human.

Bias related to cultural differences, situational nuances, and other contextual misunderstandings is easy enough to grasp. But also be on guard for AI’s subtle biases:

  • AI is eager to please – it won’t criticize you.
  • AI leans positive.
  • AI avoids ideas and recommendations that might be considered controversial.
  • AI might be quick to say it doesn’t know something if it can’t look up an obvious answer.

Respect Social Impact with AI Ethics

Artificial intelligence is reshaping society in ways both promising and concerning. AI brings challenges, particularly when it comes to labor, creativity, and the spread of information. Understanding these impacts is crucial for business leaders to harness AI responsibly.

One of the biggest concerns with AI is its future impact on jobs. Some prognosticators believe AI has the potential to disrupt industries by replacing certain roles entirely. But technologists and many business practitioners believe that negative prediction is fueled by human beings’ natural aversion to loss. In truth, current best practices focus on using AI to enhance, not replace, you and your team members.

When onboarding new AI technology in your business, you must take care to communicate to all levels of your company that everyone’s job is secure. You want your employees to use AI tools, not fear them. So teach your people how to use these tools to do their jobs more effectively, not do their jobs entirely. For example, an AI tool that helps with inventory management or customer support can allow your employees to focus on building relationships and solving more complex problems, rather than worrying about being replaced.

AI also touches the flow of information, which comes with its own set of ethical questions. AI’s ability to generate content raises concerns about misinformation. If left unchecked, AI tools could inadvertently produce false or manipulative information that spreads quickly. But by guarding against AI’s chinks in the armor, like hallucinations and bias, verifying all content it produces, and using it as a complement to human judgment, business leaders can confidently embrace AI as a force for innovation and positive social change.

Are you ready to start implementing AI into your marketing strategy? Let’s chat.
Write to Jared at 
[email protected].

Disclosure: ChatGPT helped ideate the first draft of this article. The author revised subsequent drafts and contributed original copy to better reflect the intended message and voice.
