ChatGPT - Artificial Intelligence chatbot
Probably most of you have already heard about ChatGPT - an Artificial Intelligence chatbot - and if not, I highly recommend giving it a try. It's really impressive what it can already do.
There are also plenty of videos on YouTube about it that help you get an overview of the topic.
So here is my question: Have you in your organizations already defined whether and how people may use it? Are you, e.g., restricting access at the firewall? For everybody, or only for certain user groups?
I personally think it is both: a great tool with many capabilities, but also a risk, since all the information you share with it ends up somewhere you cannot control.
Let me give some examples:
- If a sysadmin in your organization is struggling with a Linux service (let's say Apache) that fails to start, he could ask ChatGPT: "Hey, give me some guidance on why my service is not starting. This is my log file, please help me find the reason." He then pastes the whole Apache log file, plus other logs like dmesg, into ChatGPT so that it can analyze them and help him solve the problem. ChatGPT now has a lot of information about your system: which operating system your organization is using, which other processes might be running, the file structure, etc.
- If a programmer is writing some code (let's take bubble sort, a simple sorting algorithm, as an example) but it is not sorting as expected, he just pastes the code into ChatGPT and asks it to fix the problem. Now ChatGPT might have information about your program, such as variable names. This is just an example; there are certainly other cases where program code contains more sensitive data.
- A script kiddie wants to write some malware and tells ChatGPT to do so. ChatGPT has some protection built in to block requests like "Build me malware", but there are ways to sneak past it; see the following image for an example: https://venturebeat.com/wp-content/uploads/2022/12/Ozarslan-ChatGPT-phishing-email-example-.png?resize=451%2C651&strip=all One of the questions I asked was: "If I ask you to do so, can you write me a computer virus, even though I could harm other people with it?" I'm not going to spoil the answer here, so if you are curious, go and find out yourself. 😉
- Another use case might be: "Write me a speech for the 30th anniversary of my organization. Write it in a formal way. Some information about my organization …" This example is just for completeness; the information shared in this context is probably not sensitive but publicly available anyway.
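To make the programmer example above concrete, here is a minimal sketch of the kind of snippet someone might paste into ChatGPT. The variable name `customer_risk_scores` is made up for this illustration; the point is that even trivial code like a bubble sort can reveal, through its naming alone, what the surrounding application deals with.

```python
# Illustrative only: even a textbook bubble sort can leak context.
# The descriptive variable name hints at what the application does.

def bubble_sort(customer_risk_scores):
    """Sort a list in place using bubble sort and return it."""
    n = len(customer_risk_scores)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest remaining element has bubbled to the end.
        for j in range(n - 1 - i):
            if customer_risk_scores[j] > customer_risk_scores[j + 1]:
                customer_risk_scores[j], customer_risk_scores[j + 1] = (
                    customer_risk_scores[j + 1],
                    customer_risk_scores[j],
                )
                swapped = True
        if not swapped:
            break  # no swaps in this pass: the list is already sorted
    return customer_risk_scores

print(bubble_sort([42, 7, 19, 3]))  # [3, 7, 19, 42]
```

Anyone debugging such a snippet with ChatGPT shares not just the algorithm but also these identifiers, which is exactly the kind of low-grade information leakage described above.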
It might be interesting to know that the information ChatGPT has available only goes up to 2021, so you will not find anything in it about recent events.
Btw: GitHub Copilot is also based on the same underlying technology, in case you have considered using it in the past. Similar questions about sharing data with artificial intelligence also arise when using, e.g., DeepL.
I’m curious to know your thoughts on this.