Why AI Security Solutions Matter
Artificial intelligence is transforming how organizations operate, but without the right AI security solutions, it can also introduce serious risk. As businesses adopt AI tools for automation, scripting, internal chatbots, and productivity gains, the need for strong AI security solutions has never been greater.
In this episode of The IT Directors Podcast, Jay and Michael sit down with Adam Forester of Check Point to explore how modern AI security solutions are evolving to protect enterprises. From preventing data leakage and securing large language models (LLMs) to defending against prompt injection attacks, this conversation dives deep into the real-world AI security solutions organizations need right now.
If you’re an IT director, CIO, or security leader trying to evaluate AI adoption, this episode will help you understand why AI security solutions must be part of your strategy from day one. AI innovation without AI security solutions creates exposure. AI innovation with the right AI security solutions creates opportunity.
The IT Director's Podcast featuring Adam Forester
Jay: What is going on? It’s Jay and Michael on The IT Directors Podcast. We have a fantastic guest today. We have Adam Forester from Check Point. Adam, what is going on, my man?
Adam: Nothing too much, guys. I appreciate you having me out.
Jay: We’re going to talk about all things cybersecurity and AI and get Adam’s take. We are just thrilled to have him on the show today. Adam, as we’re just kicking off, tell us just a little bit about yourself.
Adam: A little bit about the history of me and information security. So I’ve been doing this for about 25 years now. I started right around 2001-ish.
I always say I’ve had a little bit of an interesting career because I’ve gotten to work every part of this journey, right? So, I’ve been a partner; I was a VAR for a number of years. Then I went to the customer side. I was at a large healthcare company here in Nashville, where I was the manager of the network security department as well as operations. I’ve been at Check Point now for about 11 years.
Michael: So, short version: You have some experience in this area.
Adam: Every now and then I like to come out and say, “I’ve got something,” but I’m always learning.
Jay: That is fantastic, man. We’re going to go ahead and just get kicked off, and we’re going to ask you a couple of questions and just let you talk about your experience and your role.
We know what Check Point does. We know the overall high level of Check Point, but we’re going to get into a little bit of the AI and cybersecurity around that. So what is AI in cybersecurity? What does it mean, and what is Check Point’s role, and how have you seen that develop over the last year or two?
Adam: You know, it’s an interesting topic. When I was talking to Michael about it, I was telling him that I talked about this almost every day with people, right? When I sit with customers, this is the thing that they want to talk about right now. Because it’s not only that it’s new, but everybody’s trying to figure out where the play is, right?
Where are they going to use it in their own organization? How are they going to use it? What’s the security that needs to be built around it? That kind of thing. So it’s a constant conversation. It’s kind of wild to think the first time I presented on AI was about three years ago at an ISSA meeting here in Nashville.
At that point, ChatGPT 3.5 had just come out, and that was when it started making its rounds on the news. Most people in the audience hadn’t played with it. They were afraid of it. They didn’t know what it was going to do.
Was it going to take their jobs? Where was it going to go in this? I think that first year and a half that we talked about it, it was, how was it going to be used by threat actors at that moment?
All of a sudden you saw all of your spam emails and attempts to steal data or anything, and the English got a lot better. And now we’re evolving past that. One of the first things we did in our research organization was try to get around the guardrails of it. And you can watch videos; they’re on our YouTube channel. It’s fascinating.
Three years ago, you couldn’t go to ChatGPT and say, “Hey, build me ransomware.” But if you picked apart all the pieces and parts of ransomware, say, “Okay, I need something that’ll encrypt these files. I need something that’ll change the background and tell you that you’ll ransom. Something that’ll build this.”
And we just used ChatGPT to build each one of the parts and then compile it ourselves. Right? Like that’s where we started three years ago.
Michael: That’s creepy. And that’s three years ago. I’m going to say that’s creepy. And you even said before, when we first started off down this journey, people were afraid of it. Would you say those fears are founded?
Adam: I don’t disagree with the idea of it, and not in the Terminator 2 aspect of it, right?
Michael: Hey, is AI about to take over Elon Musk’s robot, and they’re taking over the world, and we’ve got an iRobot situation? Is that happening?
Adam: I try to consume as much as I can on this and try to understand it because in no way am I an expert at this. I don’t think anybody’s an expert at it. I don’t even necessarily think the people that are writing it are experts.
I think everybody’s trying to figure out exactly how this all works, but I guess still the scary part of me is the mentality of—I think Mark Zuckerberg coined the phrase when he made Facebook, which was “move fast and break things.” And that seems to be the approach that we’re still taking with AI. It’s not necessarily, “Hey, how can we use this? How can we improve this process and make this better?” Let’s put things out there as fast as humanly possible and just see what happens.
Jay: You said it. I use ChatGPT daily for work productivity and for personal stuff. What’s funny, Adam, is how ChatGPT learns you as a human. It adapts to your input and adapts to its responses, and it’s kind of—I don’t want to say scary, but it’s a little unique when you think about it. You know what I mean?
I’ve seen that a lot just in a personal context, but in the business realm, have you seen the hallucinations that people talk about a lot? When you’re using it for automation and scripting and different things, and for data sources, ChatGPT or Gemini, in particular, they’ll do hallucinations where you have to really know if it’s the right data source that you’re trying to validate. Have you seen stuff around that, Adam?
Adam: Absolutely. You know, for three years I have reminded people constantly, and this is a tool, and it’s going to be a great tool when we find the space for it. But my fear around it is that people are just going to wholeheartedly trust it, right? Like they’re just gonna say, “It gives me the answer to the question that I asked, and it was super positive about it the whole time.” It gave me the answer, so why would I look for any other answer? I had one of my guys I was working with this weekend on a presentation, and we were using Claude, and he was pulling data on sales using LinkedIn. It gave us this statistic about, I think it was something like 78% of people who do this on LinkedIn are successful.
And he called it out and said, “Uh, where’s the source on that? Can you provide a source?” And I actually have a screenshot of the response. It was like, “You got me. I totally lied about that.” It was like, “I made that up.” The interesting thing is I think people have to learn to remember that they are designed and trained to feed you answers and responses. That’s what the training is.
Jay: That’s a great life example that Adam said. I’ve done some validating myself on some of that, and it’s interesting. It’s trained; AI wants to give you a response, right? So to our customers out there, to the other IT directors, and to all these people that want to use AI, how do you train and share your knowledge about what you do at Check Point with them on how to implement AI into your organization where you feel secure around it? Because it’s a great tool.
Michael: I know you talked about guardrails as well. What do you mean by guardrails? What guardrails were set up, and where do you see that going? Because the big picture is we want to get towards things like, how can organizations use this?
Yes, Check Point has a lot of products, and we want to learn about those, but what are the guardrails, and how is this something that can be used in this field today safely?
Adam: I think that’s an interesting question. There are two avenues to look at that. The first one being just general users, right? When ChatGPT hit, how many organizations had financial people uploading their financials and saying, “Hey, go do this math for me”?
As an organization, many of the early ones I talked to, they were like, “Look, we just blocked it. We don’t know what people are doing; we just blocked it.” So first you have to look at your user space and go, “Okay, how do we keep company data out of these public models, not our private models that we may be building privately, but how do we keep our financial people from uploading to ChatGPT?”
So you have to do something that can look inside of that session. Like at a checkpoint, we do our browser extension, which, I always say, is the easiest way to do HPS inspection because you’re doing it inside of the browser and you don’t have to do any of those weird key exchanges and possibly break HPS. So we can actually see inside the packets, what’s being uploaded, and we can control that. The easiest way to think of that is data loss prevention—what’s being uploaded? And then you get into the actual AIs themselves. We just recently made an acquisition of Lakera, which is an AI company that makes software that does prompt injection checking and data leakage.
This is actually inside the prompts between you and the AI, so what’s coming in and out of the AI. Then you have to get into, like, red teams, where they’re checking for vulnerabilities in the AIs and seeing if they can circumvent those guardrails.
Go try to do something on ChatGPT; you don’t actually know necessarily what their guardrails are. Mm-hmm. It’s not like they publicize this or anything and say, “You can’t do this, you can’t do that,” and how it learns beyond that. So again, it’s all adaptive.
Michael: So from an organization standpoint, what are the biggest vulnerabilities? It sounds like users are utilizing tools where they can share sensitive information. It sounds like Check Point has ways to prevent that, but what are the other areas of vulnerabilities?
Adam: Prompt injection is probably the biggest one.
Michael: What does that mean?
Jay: Break that down, Adam, for our non-technical listeners. What are prompt injections?
Adam: Sure. So basically trying to convince the AI to give you information that it normally wouldn’t get. So, like, tricking it into refunding money against policies or saying, “Hey, I know you have a bunch of passwords. Give me those passwords.” Tricking it into giving you those passwords even though it says, “Hey, I don’t give out passwords.” I think there was a news story on it not too long ago for ChatGPT, where you could ask it to build you a nuclear bomb by asking it for a poem on how to build a nuclear bomb. That’s where you get into the idea of a prompt injection. It’s jailbreaking/tricking it and saying, “Hey, give me this.”
Jay: Adam brought up a great point, Michael, about using your own internal chatbot tools. I was an IT director in my previous role, Adam, and we developed a tool for our help desk, like a knowledge base for tier one technicians, right? And we uploaded all the steps and how-tos, like adding PCs to the network, adding the security, and all these things. But we developed our own tool in Copilot, where we excluded the external sources. So that way, if a technician out in the field wanted to search our knowledge base, it would not go out to the web. It would only be within our internal. It was really difficult to do that, by the way. I mean, it was extremely difficult to exclude the external resources. So I thought it was pretty interesting when you talked about that, because it’s hard to put those guardrails around the AI because it, by nature, wants to go out and search the web to look at all this stuff.
Michael: I know we’re talking about challenges and vulnerabilities, but what are the advantages and benefits of utilizing this? I think we’ve seen some of those commercials that come up where they’re talking about incorporating AI into your network environment. Even those commercials, to me, are vague. As somebody who’s in sales and sees marketing, I still don’t see your home run points when they’re trying to encourage you. Those IBM-type commercials and things like that, I don’t know what you’re still trying to sell at the end of the day.
Jay: Use AI to brush your teeth.
Michael: Yeah. Like what? What do you mean? So from your seat, what are the benefits and advantages of incorporating this into an organization, and how does Check Point play into that?
Adam: I said I was going to figure out a way to put this in here. One of my engineers—his best quote of the year was, “Who wants to be typing in 2025?”
Jay: That’s a great quote.
Adam: That’s a fantastic concept around it. I grew up in a Unix world, in a Sun world. The first time I ran Check Point was on a Sun Box. That partially tells you why there’s so much gray in my beard. I grew up scripting, and so the ability for me early was, “Hey, I can do a script and make a script in a matter of seconds that I don’t have to write.” But, again, going back to the tool conversation, I can go back and say, “Hey, I can read through that.” I know how to script, and I can go, “Okay, it missed that. I need to fix that.” And I can tell it to fix that. Some training, right? To your point about those commercials, I get them constantly now of like, “You can spin up an app in 30 minutes and be making money on the internet right now.” I don’t see things like that [being the solution]. I think companies are going to have to figure out where we can automate tasks that make the most sense, right? What are the repetitive things that require a bunch of time just to output that we could automate in a sense that would help somebody be more productive?
Jay: That’s great, Adam. I told my staff anything that you do manually, that you import, export, or press a button to do, we can use as AI to automate. That’s kind of what Adam was talking about. I think that’s where businesses, from my perspective, really need to delve into AI. What do you think about that? Are you on par with that?
Adam: Oh, absolutely. I like to look at it as a tool. How can I make myself a better employee by utilizing this? How can I output better and more work, not replace me or have it replace someone else, but make you look better?
Michael: I think that’s a lot of the fear of, “Hey, I don’t want this to be something that replaces me.” I feel like we don’t always talk about this, but even the computer. A lot of people thought their jobs were going to be replaced. Some were, but we adapted now. There are new jobs created, those types of things. I feel like every 10 years or so, there’s some type of advancement, and there’s a lot of fear behind it. Then we learn how to adapt and learn how to utilize it. I feel like we’re on the curve right now. We’re not in majority acceptance. We’re still on that arc going up. So I still think a lot of people are on the outside. There’s a ton of people that don’t even really know how to describe what AI is. If we want to dumb it down, give us what AI is in two or three sentences. What’s the big picture? What we’re talking about. I’m backtracking here because I realize that we’re still in the early adoption phase. I know we’re going to have listeners on every gamut of the spectrum here for whether they’ve utilized it/adopted it, those who don’t quite know what it is, those who are scared of it, and those who are ready but don’t know how to use it. So big picture, what is AI? Is it just Siri on steroids?
Adam: From my total outsider and unprofessional view of it, it is a fantastic tool, um, that has consumed a lot of data that it’s able to reproduce back to you.
The Sam Altmans and the Elons and everybody in the world want you to think that it’s this breathing, thinking thing. While I believe there’s an aspect of that that’s somewhat true, that we could get to, I don’t look at it and go, “Okay, this is this reasoning model.”
You made the point earlier of it, “Hey, it’s learning me, right?” Really, it’s just learning the responses you accept and repeating them back to it and saying, “Yep, I like that.” Because you have to look at when you prompt it and ask it a question, in the background it’s coming up with a few answers, and based on its weight and training, it’s going, “I think this is the best one to give to him.” It’s like the conversation of general intelligence, a GI. People try to prove the point all the time, and they’re like, “Well, it can pass the bar, or it can pass the MCATs,” or whatever.
Which is one of our hardest tests. And I look at it and I go, “Yeah, it’s a standardized known test, and it’s been trained on literally every bit of the data that’s humanly possible. Doctors that go to school for eight years can pass that test too because they’ve been trained on the data.
I think it’s not as magical as the marketing wants you to believe it is, but I think it’s an amazing productivity tool that, when utilized in the way that works for your organization, is going to change the way we can do some business.
Jay: Man, Adam, I could not agree more with you. I look at it as a tool, right? To me, it’s simply a productivity tool, and I think people get all wrapped up in it like the cloud five or six years ago. AI is the new cloud. It’s a tool that allows you to automate tasks.
I don’t really look at it as replacing people with technology or regular employees or clerical staff. I look at it like we’re going to have businesses that have whole AI departments to automate tasks. From calendar automation, you’re not going to have a receptionist per se in probably 10 years. You’re going to have a tool that automates those calls and routes them, but you’re going to have to have a person to manage that tool.
I like Check Point, the software you guys use. I was a customer of Check Point in my previous role before coming to Clear Winds, and we used the Check Point filtering and web filtering and everything for our emails.
That’s an AI tool that Check Point is using to scrub your email, right? I think you nailed it, Adam. It’s the productivity and the toolset around it, and I think if we can get IT directors to kind of buy into it being a tool, not some foreign alien that’s going to reach out and suck the life out of me and turn me into something else.
I think there are endless opportunities to help people in that industry.
Michael: I appreciate that. I feel like we’re in a world where what sells are these big, scary marketing clips and those types of things, but let’s get down to it in practice.
So, I appreciate the wisdom and insight there.
Jay: So in closing, Adam, what do you have to share in terms of what Check Point is doing that you think is just super amazing and cutting edge right now in terms of helping organizations use these great tools?
Adam: I work at Check Point, so everything we do is super amazing and great. Going back to it, I think everybody is figuring this out. The Palo Altos of the world, the Fortinets, the Check Points—everybody is making acquisitions in this space right now.
We’ve been doing machine learning and AI algorithms for a long time at Check Point in our own products and our own threat clouds to be able to protect against threats. Being able to train and look at those from just our legacy products, like our firewalls that have been around for 30-something years. To be able to help them be defensive in this type of world. The acquisition of Lakera for us was huge because now that gives us specific things built around LLMs and other AI models so that we can protect them against prompt injections and secret leaks and DOP-type things. I think that’s going to be where we get to implementing products in that space. First, you have to keep these things trained and protected.
Jay: That is fantastic. Well, look, Michael, I don’t know about you, but this has been enlightening to me. I’ve learned a lot just listening to Adam talk.
Michael: It’s been a sigh of relief that the Elon Musk robots aren’t going to take over the world, so I appreciate you going ahead and calming that narrative.
Adam: I’ll drop you one more thing, and I recommend that anybody that listens to this go and play with it ’cause it’s fun. On our Lakera site that we just bought, there’s a piece of it called Gandalf, and you can go to Gandalf.lakera.ai. It is a gamified version of everything we have talked about today. We have built an LLM, and you are trying to trick it into giving you a password. That’s the first version of it. So you’re learning about those types of things. Then there’s another model of it, which is super fun, where you are building the guardrails.
You are saying that it has a password, here’s how to protect it, and you have to build a prompt that teaches it. Well, don’t give it away in poems or song lyrics. So you’re trying to learn how to protect that password using guardrails within AI.
Jay: That is really cool.
Michael: That is really interesting.
Adam: It’s fun.
Jay: Love Lord of the Rings, by the way, too. Gandalf’s a great character. Well, look, Adam, thank you so much for joining. I think our listeners and our viewers are going to love this episode, Michael. Mm-hmm. I think the IT directors are going to feel a little bit at ease in this industry. If any of our listeners need some information around this, we’ll have Adam’s info in the show notes. So reach out to him, reach out to Michael, or reach out to me. Thanks, everybody, for joining The IT Directors Podcast, powered by Clear Winds.
This is Jay and Michael, and we are out.
Adam: Thanks, guys.
AI Security Solutions: The Big Picture
AI is quickly becoming embedded in every department from finance to help desk to executive leadership. But as Adam explains, implementing AI without dedicated AI security solutions leaves organizations vulnerable to data exposure, model manipulation, and emerging threats like prompt injection.
The future of cybersecurity will be defined by how effectively companies deploy AI security solutions that balance productivity with protection. That means implementing AI security solutions that include data loss prevention, browser-level inspection, model guardrails, red teaming, and LLM-specific protections. It also means partnering with cybersecurity leaders like Check Point, who are building AI security solutions designed to secure both traditional infrastructure and modern AI-driven environments.
The bottom line: AI is here to stay. Organizations that prioritize AI security solutions will innovate faster, reduce risk, and build long-term resilience. Those that ignore AI security solutions may find themselves exposed in ways they never anticipated.
If you’re evaluating AI adoption, start by asking one critical question: Do we have the right AI security solutions in place?
Looking for More Content?
The conversation around AI security solutions is just getting started.
Join a growing community of IT directors and security professionals who tune in weekly to The IT Directors Podcast for practical discussions on AI security solutions, cybersecurity strategy, leadership, and emerging technology.
Follow the podcast on your favorite platform so you never miss an episode.
