Safety Tips for AI/Chatbot Use

Summary

AI/Chatbots offer many tools that can make tasks much easier. However, as with any new technology, there are risks, and malicious actors may use these tools for their own goals. These tips provide some security guidance for using AI/chatbot technologies.

Body

AI and chatbots offer a robust array of tools that can make many everyday tasks much easier, and they often have a wealth of information in their databases to draw upon. However, they can make errors, and malicious actors may take advantage of these tools to achieve their own goals. Here are some things to consider when using a chatbot or AI program:

  1. What information am I giving this technology? AI systems draw information from a database (or collection of databases). This information is the foundation that “teaches” the AI to respond better to questions and tasks. Unless otherwise specified, an AI will record and store all information it is provided. This poses a problem if you give it personal or confidential information; once the chatbot has it, it is in there forever. Always be aware of the information you are providing, and never provide personal or confidential information. Also, be aware of the combination of data you are providing. Although a single detail might not seem like an issue, multiple details, even spread across multiple queries, could amount to personally identifiable information when put together and analyzed (see the sketch after this list).

  2. Do these answers make sense? Designers use a database and a set of algorithms to teach AI and chatbots how to perform their functions. This information can come from a wide variety of sources, and depending on the data and code used, it can lead to some wild, even hilarious, responses. Always review the answers you receive from AI/chatbots with skepticism and validate the information before accepting it as true. Depending on the technology, getting the answer “Neil Armstrong” to the question “Who was the first person to walk on the Sun?” might not be outside the realm of possibility.

  3. Deceptive Tactics – AI and chatbots can be used to generate fake or misleading information, videos, posts, and other communications. Always be skeptical of communications that sound odd, do not make sense, or ask for unusual things. These technologies can generate everything from fake social media posts to voice imitations to full deepfake videos. They can be used to spread misinformation (e.g., to influence financial markets), run phishing campaigns (e.g., imitating the voice of your boss), or carry out other deceptive practices. Always validate the information you are provided, and if something seems too good to be true, it probably is.
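
For staff who send prompts to a chatbot from a script rather than a browser, the following minimal sketch illustrates the first tip above. It is an assumption-heavy example, not an official tool: the PII_PATTERNS table and the check_prompt() helper are made-up names, and the regular expressions catch only a few common formats (email, phone, SSN-style numbers). It simply warns about obvious personal details in a draft prompt before it is sent.

```python
# Minimal sketch (not an official tool): screen a draft prompt for obvious
# personal details before sending it to a chatbot. The pattern list and the
# check_prompt() helper are illustrative assumptions, not a complete PII
# filter; they only catch a few common formats.
import re

# A few example patterns for data you would not want a chatbot to store.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return a warning for each kind of personal detail found in the prompt."""
    return [
        f"Possible {label} found: remove it before sending."
        for label, pattern in PII_PATTERNS.items()
        if pattern.search(prompt)
    ]

if __name__ == "__main__":
    draft = "Hi, I'm locked out. My email is jane.doe@example.com, call 555-123-4567."
    for warning in check_prompt(draft):
        print(warning)
```

Even with a check like this, the safest habit is still the one described above: leave personal and confidential details out of the prompt entirely.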

With a clear view of both the benefits and the hazards of AI/chatbots, we can better evaluate when, and whether, to use this emerging technology.

Details

Article ID: 162008
Created: Wed 10/9/24 11:36 AM
Modified: Thu 10/31/24 2:39 PM
