AI tools are nothing new, but the ChatGPT craze has definitely boosted them back into the headlines - with the masses rushing to every "How to use ChatGPT to [fill in the blank]" guide. From writing poems to saving time at work, have we stopped to consider how to train employees on the smart use of these tools while protecting company data?
It's important that your security awareness training covers current trends so your employees stay aware of new potential threats. Cybercriminals are constantly on the lookout for popular and trending topics to take advantage of, as is the case with the latest AI tool in the news.
Wizer Training covers these trending security awareness topics and more
Work use cases go far beyond rephrasing a block of text. As SC Magazine reported, one AI researcher discovered he could get ChatGPT to act as a Linux emulator that could play games and even run programs, and smaller startups running on a shoestring budget have turned to it for writing code and other tasks.
Whether it's personally identifiable information or proprietary code, sensitive data is ending up in these tools - so it's important to be sure employees who turn to them to boost their efficiency and workflow understand when and how to use them safely.
Read on for a quick guide to help start the conversation with your employees on how to securely use AI tools like ChatGPT. Be sure to download the cyber security training tips for employees as a PDF, too!
Any trendy topic opens an opportunity for cybercriminals to creatively take advantage of the unsuspecting. Be sure your security awareness training covers lookalike apps and extensions. Just because an app or browser extension touts a well-known name does not mean it is an official app from that company. Lookalike apps and extensions give a false sense of security, and the over-eager user who doesn't take time to verify an app's authenticity opens up their device (and the company) to malware and other threats.
AI tools use the information submitted to them to train the underlying language model, so employees should be careful to remove any sensitive or personal information before submitting text to the program.
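To make that habit concrete, here's a minimal sketch of what scrubbing text before pasting it into an AI tool could look like. The patterns and names below are illustrative assumptions, not a complete sanitizer - real redaction also needs to catch names, addresses, account numbers, and internal project identifiers:

```python
import re

# Illustrative PII patterns only - a real sanitizer needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace recognizable PII with labeled placeholders before sharing."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = "Follow up with jane.doe@acme.com or call 555-867-5309."
    print(scrub(draft))
    # Prints: Follow up with [EMAIL REDACTED] or call [PHONE REDACTED].
```

Even a simple pre-flight check like this makes the "pause before pasting" habit easier for employees to follow.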
Train your employees to identify and avoid phishing attacks with Wizer Boost.
Also, as with any online account, there is always a risk that the company hosting your account could be breached. Since your chat history could be exposed in a breach, customer and company data should never be submitted to AI tools in the first place.
AI tools are pretty smart - and ChatGPT is definitely impressive in its ability to mimic human writing and provide solid results. However, as with any information these days, it's critical to verify results - especially stated stats and facts - before using them, particularly when representing your company.
ChatGPT has already proven it's not fully reliable when writing code, as evidenced by Stack Overflow's temporary ban on answers generated by ChatGPT because the code often proved too buggy. When developers use AI tools to assist their workflow, any content created by an AI tool needs to be carefully scrutinized before it is implemented.
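To show what "scrutinize" can mean in practice, here's a hypothetical sketch of the kind of subtle flaw that plausible-looking generated code can hide - the table and function names are made up for illustration. The first version runs fine in a quick test yet is open to SQL injection; the second keeps user input as data:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # DON'T: string formatting lets crafted input rewrite the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # DO: a parameterized query keeps user input as data, not SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    malicious = "x' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # leaks every row in the table
    print(find_user_safe(conn, malicious))    # returns [] - nothing leaked
```

Both versions "work" on happy-path input, which is exactly why AI-generated code deserves a security review, not just a smoke test.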
The OWASP Top 10 should remain part of any process for ensuring secure code, whether or not that code was written with the help of an AI tool. For quick training on the basics of the OWASP Top 10, check out the Wizer series for developers.
You know the one - the friend who confidently states opinions on everything from moon landings to the best way to cook a burger. True, they may be knowledgeable, but no one knows everything about, well, everything. So a little caution goes a long way - verify before taking anyone's word. The same goes for nifty AI tools - just don't assume they're right 100% of the time.