We explore the benefits and dangers of cybersecurity professionals using AI tools like ChatGPT in the data center space, as well as how those same tools are already being leveraged by bad actors.
The world changed on Nov. 30, 2022, when OpenAI released ChatGPT to an unsuspecting public.
Universities scrambled to figure out how to assign take-home essays when students could simply ask ChatGPT to write them. Then ChatGPT passed law school tests, business school tests, and even medical licensing exams. Employees everywhere began using it to write emails, reports, and even computer code.
It’s not perfect and isn’t up to date on current news, but it’s more powerful than any AI system the average person has ever had access to before. It’s also more user-friendly than the artificial intelligence built into enterprise-grade systems.
It seems that once a large language model like ChatGPT gets big enough, with enough training data, enough parameters, and enough layers in its neural network, weird things start to happen. It develops “emergent properties” not evident or possible in smaller models. In other words, it starts to act as if it has common sense and an understanding of the world – or at least some kind of approximation of those things.
Major technology companies scrambled to react. Microsoft invested $10 billion in OpenAI and added ChatGPT functionality to Bing, suddenly making the search engine a topic of conversation for the first time in a long time.
Google declared a “Code Red,” announced its own chatbot, Bard, and invested in OpenAI rival Anthropic, which was founded by former OpenAI employees and has its own chatbot, Claude.
Amazon announced plans to build its own ChatGPT rival, along with a partnership with yet another AI startup, Hugging Face. And Facebook parent Meta is fast-tracking its own AI efforts.
Fortunately, security pros can also use this new technology. They can use it for research, to help write emails and reports, to help write code, and in other ways we’ll dig into below.
The troubling part is that the bad guys are using it for all of those things too, as well as for phishing and social engineering. They’re also using it to create deepfakes at a scale and level of fidelity unimaginable a few short months ago. Oh, and ChatGPT itself might also be a security threat.
Let’s go through these main data center security topics one by one, starting with the ways malicious actors could use – and, in some cases, are already using – ChatGPT. Then we’ll explore the benefits and dangers of cybersecurity professionals using AI tools like ChatGPT.
How the Bad Guys Are Using ChatGPT
Malicious actors, including Russian hackers, are already using ChatGPT. Soon after the tool was released on Nov. 30, discussions appeared on Russian-language sites sharing information about how to bypass OpenAI’s geographical restrictions using VPNs and temporary phone numbers.
As for how exactly ChatGPT will be used to help spur cyberattacks: in a BlackBerry survey of IT leaders released in February, 53% of respondents said it would help hackers create more believable phishing emails, and 49% pointed to its ability to help hackers improve their coding abilities.
Another finding from the survey: 49% of IT and cybersecurity decision-makers said that ChatGPT will be used to spread misinformation and disinformation, 48% said it could be used to craft entirely new strains of malware, and a shade below that (46%) said it could help improve existing attacks.
After all, the AI model has been trained on virtually everything ever publicly published. “Every research vulnerability report,” Hinchcliffe said. “Every forum discussion by all the security experts. It’s like a super brain on all the ways you can compromise a system.”
That’s a frightening prospect.
And, of course, attackers can also use it for writing, he added. “We’re going to be flooded with misinformation and phishing content from all places.”
How ChatGPT Can Help Data Center Security Pros
When it comes to data center cybersecurity professionals using ChatGPT, Jim Reavis, CEO of the Cloud Security Alliance, said he’s seen some incredible viral experiments with the AI tool over the past few weeks.
“You’re seeing it write a lot of code for security orchestration, automation and response tools, DevSecOps, and general cloud container hygiene,” he said. “There are a tremendous amount of security and privacy policies being generated by ChatGPT. Perhaps, most noticeably, there are a lot of tests to create high quality phishing emails, to hopefully make our defences more resilient in this regard.”
In addition, several mainstream cybersecurity vendors have – or will soon have – similar technology in their engines, trained under specific rules, Reavis said.
“We have also seen tools with natural language interface capabilities before, but not a wide open, customer facing ChatGPT interface yet,” he added. “I expect to see ChatGPT-interfaced commercial solutions quite soon, but I think the sweet spot right now is the systems integration of multiple cybersecurity tools with ChatGPT and DIY security automation in public clouds.”
In general, he said, ChatGPT and its counterparts have great promise to help data center cybersecurity teams operate with greater efficiency, scale up constrained resources, and identify new threats and attacks.
“Over time, almost any cybersecurity function will be augmented by machine learning,” Reavis said. “In addition, we know that malicious actors are using tools like ChatGPT, and it is assumed you are going to need to leverage AI to combat malicious AI.”
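To make the DIY end of that concrete, here is a minimal, hypothetical sketch of the kind of container-hygiene check a team might ask ChatGPT to draft. The rules and flagged instructions below are illustrative assumptions, not output from the tool or a Cloud Security Alliance recommendation.

```python
# Hypothetical example of a small DevSecOps hygiene script a team might ask
# ChatGPT to draft. The rules and file handling are illustrative only.
import sys
from pathlib import Path

RISKY_INSTRUCTIONS = {
    # Flag unpinned or ":latest" base images.
    "FROM": lambda args: ":" not in args or args.strip().endswith(":latest"),
    # Flag containers that run as root.
    "USER": lambda args: args.strip() == "root",
    # Prefer COPY over ADD unless remote URLs or tar extraction are needed.
    "ADD": lambda args: True,
}

def lint_dockerfile(path: Path) -> list[str]:
    """Return human-readable findings for one Dockerfile."""
    findings = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        instruction, _, args = stripped.partition(" ")
        check = RISKY_INSTRUCTIONS.get(instruction.upper())
        if check and check(args):
            findings.append(f"{path}:{lineno}: review '{stripped}'")
    return findings

if __name__ == "__main__":
    # Usage: python dockerfile_lint.py Dockerfile [Dockerfile ...]
    results = [f for p in sys.argv[1:] for f in lint_dockerfile(Path(p))]
    print("\n".join(results) or "No obvious hygiene issues found.")
```

In practice, anything a model drafts along these lines still needs human review before it touches production systems, a point Reavis returns to later in this piece.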
How Mimecast Is Using ChatGPT
Email security vendor Mimecast, for example, is already using a large language model to generate synthetic emails to train its own phishing detection AIs.
“We normally train our models with real emails,” said Jose Lopez, principal data scientist and machine learning engineer at Mimecast.
Creating synthetic data for training sets is one of the main benefits of large language models like ChatGPT. “Now we can use this large language model to generate more emails,” Lopez said.
He declined to say which specific large language model Mimecast was using. He said this information is the company’s “secret sauce.”
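Purely as an illustration of the general technique (Mimecast hasn’t disclosed its model or pipeline), a synthetic-email generator can be little more than a loop around a chat API call that saves labelled samples for a detector’s training set. The prompt, model name, and file name below are assumptions:

```python
# Illustrative sketch only: generating labelled, synthetic phishing-style
# emails with OpenAI's chat API (openai 0.27.x) for a detector's training set.
import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PROMPT = (
    "Write a short, realistic email that pressures an employee to urgently "
    "update their payroll details via a link. This is synthetic training "
    "data for a phishing detector."
)

def generate_samples(n: int = 5) -> list[dict]:
    samples = []
    for _ in range(n):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,  # higher temperature yields more varied samples
        )
        samples.append(
            {"text": resp.choices[0].message.content, "label": "phishing"}
        )
    return samples

if __name__ == "__main__":
    with open("synthetic_phishing.jsonl", "w") as f:
        for sample in generate_samples():
            f.write(json.dumps(sample) + "\n")
```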
Mimecast is not currently looking to detect whether incoming emails are generated by ChatGPT, however. That’s because it’s not only the bad guys who are using ChatGPT. The AI is such a useful productivity tool that many employees are using it to improve their own, completely legitimate communications.
Lopez himself, for example, is Spanish and is now using ChatGPT instead of a grammar checker to improve his own writing.
Lopez is also using ChatGPT to help write code – something many security professionals are likely doing.
“In my daily work, I use ChatGPT every day because it’s really useful for programming,” Lopez said. “Sometimes it’s wrong, but it’s right often enough to open your head to other approaches. I don’t think ChatGPT is going to convert someone who has no ability into a super hacker. But if I’m stuck on something, and don’t have someone to talk to, then ChatGPT can give you a fresh approach. So I use it, yes. And it’s really, really good.”
The Rise of AI-Powered Security Tools
OpenAI has already begun working to improve the accuracy of the system. And Microsoft, with Bing Chat, has given it access to the latest information on the Web.
The next version is going to be a dramatic jump in quality, Lopez added. Plus, open-source versions of ChatGPT are on their way.
“In the near future, we’ll be able to fine-tune models for something specific,” he said. “Now you don’t just have a hammer – you have a whole set of tools. And you can generate new tools for your specific needs.”
For example, a company could fine-tune a model to monitor relevant activity on social networks and look for potential threats. Only time will tell whether the results are better than those of current approaches.
Adding ChatGPT to existing software also just got easier and cheaper: on March 1, OpenAI released an API that gives developers access to ChatGPT and to Whisper, its speech-to-text model.
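The calls themselves are short. Here is a minimal sketch using the openai Python library of that era; the API key, model names, prompt, and audio file are placeholders, not a recommended production setup:

```python
# Minimal sketch of the ChatGPT and Whisper APIs (openai Python library, 0.27.x).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# ChatGPT: ask the model to summarize a made-up alert for a junior analyst.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a security analyst's assistant."},
        {"role": "user", "content": "Summarize in one sentence: repeated failed "
                                    "SSH logins from 203.0.113.5, then a success."},
    ],
)
print(chat.choices[0].message.content)

# Whisper: transcribe a recorded help-desk call for later review.
with open("helpdesk_call.mp3", "rb") as audio:
    transcript = openai.Audio.transcribe("whisper-1", audio)
print(transcript["text"])
```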
Enterprises in general are rapidly adopting AI-powered security tools to deal with fast-evolving threats that are coming in at a larger scale than ever before.
According to the latest Mimecast survey, 92% of companies are either already using or plan to use AI and machine learning to bolster their cybersecurity.
In particular, 50% see benefits in using it for more accurate threat detection, 49% for an improved ability to block threats, and 48% for faster remediation when an attack has occurred.
And 81% of respondents said that AI systems that provide real-time, contextual warnings to email and collaboration tool users would be a huge boon.
“Twelve percent went so far as to say that the benefits of such a system would revolutionize the ways in which cybersecurity is practiced,” the report said.
AI tools like ChatGPT can also help close the cybersecurity skills gap, said Ketaki Borade, senior analyst in Omdia’s cybersecurity practice. “Using such tools can speed up the simpler tasks if the prompt is provided correctly and the limited resources could focus on more time-sensitive and high-priority issues.”
It can be put to good use if done right, she said.
“These large language models are a fundamental paradigm shift,” said Yale Fox, IEEE member and founder and CEO at Applied Science Group. “The only way to fight back against malicious AI-driven attacks is to use AI in your defences. Security managers at data centers need to be upskilling their existing cybersecurity resources as well as finding new ones who specialize in artificial intelligence.”
The Dangers of Using ChatGPT in Data Centers
As discussed, AI tools like ChatGPT and Copilot can make security professionals more efficient by helping them write code. But, according to recent research from Cornell University, programmers who used AI assistants were more likely to write insecure code than those who didn’t, while also being more likely to believe their code was secure.
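As a hypothetical illustration of the pattern that research describes (not code from the study itself), an assistant might suggest a database lookup that works in testing but is wide open to SQL injection, when the safe, parameterized version is just as short:

```python
# Hypothetical example of assistant-suggested code that "works" but is insecure.
import sqlite3

conn = sqlite3.connect("users.db")  # placeholder database

def find_user_insecure(username: str):
    # Looks fine and passes a quick test, but a username like
    # "x' OR '1'='1" returns every row: classic SQL injection.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(username: str):
    # Parameterized query: the driver handles quoting, so no injection.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```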
And that’s only the tip of the iceberg when it comes to the potential downsides of using ChatGPT without considering the risks.
There have been several well-publicized instances of ChatGPT or Bing Chat providing incorrect information with great confidence, making up statistics and quotes, or providing completely erroneous explanations of particular concepts.
Someone who trusts it blindly can end up in a very bad place.
“If you use a ChatGPT-developed script to perform maintenance on 10,000 virtual machines and the script is buggy, you will have major problems,” said Cloud Security Alliance’s Reavis.
Risk of Data Leakage
Another potential risk of data center security professionals using ChatGPT is that of data leakage.
The reason that OpenAI made ChatGPT free is so that it could learn from interactions with users. So, for example, if you ask ChatGPT to analyze your data center’s security posture and identify areas of weakness, you’ve now taught ChatGPT all about your security vulnerabilities.
Now consider a February survey by Fishbowl, a work-oriented social network, which found that 43% of professionals use ChatGPT or similar tools at work, up from 27% a month earlier. Of those who do, 70% don’t tell their bosses. The potential security risks are high.
That’s why JPMorgan, Amazon, Verizon, Accenture and many other companies have reportedly prohibited their staff from using the tool.
The new ChatGPT API released by OpenAI this month allows companies to keep their data private and to opt out of having it used for training, but there’s no guarantee against accidental leaks.
In the future, once open-source versions of ChatGPT are available, data centers will be able to run it behind their firewalls, on premises, safe from possible exposure to outsiders.
Ethical Concerns
Finally, there are potential ethical risks in using ChatGPT-style technology for internal data center security, said Carm Taglienti, distinguished engineer at Insight.
“These models are super good at understanding how we communicate as humans,” he said. So a ChatGPT-style tool that has access to employee communications might be able to spot intentions and subtext that would indicate a potential threat.
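A minimal sketch of what that kind of monitoring could look like in code follows; the prompt, model, and example message are assumptions for illustration, not a system Taglienti described:

```python
# Hypothetical sketch: using a ChatGPT-style model to flag possible
# insider-risk signals in an internal message. Prompt and model are assumed.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def flag_message(message: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                "Rate the following internal message as LOW, MEDIUM or HIGH "
                "insider risk, with one sentence of reasoning.")},
            {"role": "user", "content": message},
        ],
        temperature=0,  # keep triage output as consistent as possible
    )
    return resp.choices[0].message.content

print(flag_message(
    "Can you send me the full customer export before Friday? "
    "It's my last week and I want a copy for reference."
))
```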
“We’re trying to protect against hacking of the network and hacking of the internal environment. Many breaches take place because of people walking out the door with things,” he said.
Something like ChatGPT “can be super valuable to an organization,” he added. “But now we’re getting into this ethical area where people are going to profile me and monitor everything I do.”
That’s a Minority Report-style future that data centers might not be ready for.