April 4, 2024

Considerations for Using Generative AI

What is Generative AI?

Generative AI refers to a type of artificial intelligence that can create content. It’s designed to learn patterns from input data and generate output that follows the same patterns. For instance, after learning from a dataset of images, a generative AI could create new images that are similar but not identical to the ones it was trained on. Two popular generative AI tools are Microsoft Copilot and ChatGPT.

Microsoft Copilot is a generative AI chatbot developed by Microsoft that helps with writing, coding, brainstorming, or finding information quickly. ChatGPT is a language model developed by OpenAI. It’s designed to generate human-like text based on the input it’s given. It can be used for a variety of tasks, such as drafting emails, writing articles, or even creating poetry. It’s also capable of carrying on a conversation, making it useful for building chatbots or virtual assistants.

Considerations for Using Generative AI

There are some big concerns about the overall evolution of generative AI, with some tech leaders even calling for a pause in development. But for the individual, safety is a relative term, particularly when it comes to tools. Here are the key considerations before you jump in.


Don’t Share Sensitive Data

Everything that a ChatGPT user shares with the bot is saved, and ChatGPT has the right to use this information in the future, which means that nothing you say is private. Share sensitive data with ChatGPT and you've left it fully exposed to the whims of an algorithm.

Samsung learned this lesson the hard way on multiple occasions: three separate employees submitted code, a meeting recording, and even more “confidential source code” to the chatbot, exposing that data to the service.

Some governments are taking a strong stance against the ways in which ChatGPT collects user data. Germany has said it may block ChatGPT over data security and privacy concerns, in the wake of Italy doing just that.

Confirm Sources

ChatGPT's output is a bigger concern than its input. Because the tool generates responses on its own, there's no vetting process for what it decides to tell you.

The same is true for similar generative chat-based AI programs, like Google's Bard. For one recent report, researchers submitted 100 different prompts to Bard, all asking it to explain certain types of online misinformation. In 76 of those 100 cases, Bard generated false content on those topics.

Double-check every source the chatbot cites, because it sometimes fabricates them outright.

Check Math and Formulas

The hard sciences aren't exempt from generative AI's habit of fabrication. In fact, even a standard calculator is more reliable. That's because this type of tool is built on a large language model, so it “thinks” and speaks in natural language rather than in mathematical formulas. More often than not, ChatGPT will give a natural-language response that's clear, confident, and incorrect.

AI might well be able to handle the complexities of math in the future, but for now, don't rely on it to handle your algebra. At the very least, take every solution it delivers with a grain of salt.

Copyrighted Material

Any machine learning program operates by pulling from a body of existing information. In some cases, a chatbot might pull an entire sentence from a source. If that source was published after 1928, it generally won't yet be in the public domain in the United States. If the chatbot reproduces that material too closely, the final result can easily violate US copyright law.

Run your results through an online plagiarism checker to cut down on the odds that you'll be violating copyright.

Risks of Generative AI Tools

Privacy and Financial Leaks

In at least one instance, chat histories were mixed up between users. On March 20, 2023, ChatGPT creator OpenAI discovered a problem, and ChatGPT was down for several hours. Around that time, a few ChatGPT users saw other people's conversation histories instead of their own. Possibly more concerning was the news that payment-related information from ChatGPT Plus subscribers may have leaked as well.

OpenAI published a report on the incident and corrected the bug that caused the problem. That doesn't mean new issues won't arise in the future. As with any online service, there is a risk of accidental leaks like this, as well as deliberate cybersecurity breaches from the growing ranks of hackers.

Additionally, as Copilot integrates with personal and organizational data across Microsoft services, there's a risk of sensitive data exposure. This is particularly concerning in regulated industries where data privacy is paramount.

ChatGPT as a Hacking Tool

Regarding cybersecurity, some experts are concerned about ChatGPT’s potential use as a hacking tool. The advanced chatbot can help anyone write a very official-sounding document, and it could easily be called upon to construct a convincing phishing email.

The AI is also a good teacher, making it easy to learn new skills with ChatGPT, possibly even dangerous programming skills and information about network infrastructure. The combination of ChatGPT and dark web forums could lead to numerous and novel attacks to challenge the already stretched resources of cybersecurity researchers.

ChatGPT Scams

It isn’t OpenAI’s fault, but a side effect of any exciting new technology is a surge in scams that promise greater access or new features. Since access to ChatGPT is still limited and sometimes slow, there’s strong demand for more features and better performance.

Each new update brings expanded capabilities, some of which require a membership and have limited availability. ChatGPT fervor provides fertile ground for scams. Offers of free, unlimited access at the fastest speed and with the best new features are hard to pass up.

Unfortunately, the old saying still holds — if it sounds too good to be true, it probably is. Be wary of ChatGPT offers that come via email or social media. It’s best to check trusted media outlets for news or go directly to OpenAI to confirm any invitations or deals that sound iffy.

How Xamin Can Help

Xamin is dedicated to empowering our clients with the knowledge to make the right decisions around AI adoption and integration. Our team collaborates closely with clients to identify opportunities where AI can drive efficiency, enhance productivity, and unlock new insights, as well as areas where AI controls need to be established to maintain appropriate security standards. We specialize in developing customized AI usage controls and policies that align with industry standards and regulatory requirements, ensuring our clients leverage AI technologies responsibly, safely, and ethically. By drawing on Xamin's expertise, organizations can navigate the complexities of AI adoption, maximizing the potential of artificial intelligence to achieve their business objectives while mitigating risk.