Cyber risk, once relegated to IT departments and CIOs, has quickly become an enterprise-level concern. Cyberattacks can have devastating effects on a company’s reputation, financial health, operational performance, regulatory compliance, and even legal standing. The ever-increasing number of cyberattacks each year serves as a constant reminder of the costs of security lapses.

With the recent explosion of AI tools like ChatGPT, and the growing visibility of AI-powered business software, a new wave of questions has arisen: what security vulnerabilities does AI introduce to the enterprise, and how will threat actors evolve their methods to exploit them? More importantly, how can companies practice effective cyber due diligence while still embracing the cutting-edge technologies available to them?

In a recent webinar presented by Vaco and MorganFranklin Consulting, a renowned expert in AI explored the unique security implications of artificial intelligence and cloud applications.

Our panel

Presenter: Joseph Perry, Advanced Services Lead at MorganFranklin Consulting

Host: Cortney Hancock, Senior Manager within MorganFranklin’s cybersecurity practice

What is artificial intelligence (AI)?

Joseph Perry, who leads MorganFranklin Consulting’s AI services function, began his career in cybersecurity for the U.S. Navy and has since worked across multiple domains at the NSA. Perry kicked off the presentation by taking a deep dive into the fluid definition of AI and its various permutations in both business and popular culture.

Perry provided three functional definitions for artificial intelligence:

The Umbrella Term

  • Machines that are capable of performing arbitrary tasks with competency that meets or exceeds humans’
  • Any piece of technology that processes data and produces a result for which it was not explicitly programmed
  • A label frequently applied to fraudulent tech products or services sold online, particularly in the cryptocurrency space

The Technical Term

  • Computer-based systems which perform tasks associated with human “intelligence,” from content creation to image processing to automated trading
  • Tools based on neural networks or other mathematical facsimiles of a human brain

The Practical Term

  • Tools that perform tasks more complex than their operators can understand
  • Tools that inherently reproduce the perspectives and biases of their creators
  • A murky and hype-driven market filled with bad actors

Contextualizing AI in operation

Since the average person’s understanding of AI is still limited to the most common or accessible definitions, its operational functions and, more importantly, its limitations are also poorly understood.

With this in mind, Perry outlined three key operational functions of AI technology:

Recognition

  • Optical Character Recognition (OCR)
  • Image classification
  • Data categorization

Operation

  • Data processing
  • Trend analysis
  • Productivity augmentation
  • Automation

Generation

  • Converting knowledge bases to chatbots
  • Image, audio, and video creation
  • Code generation and self-improving systems

How does AI manifest in cybersecurity?

Another key concern with AI adoption is how it fits into an organization’s existing cybersecurity structure.

There are four primary areas where AI plays a role in cybersecurity and cyber risk management:

Adversarial machine learning

The process of intentionally introducing malicious or manipulated data into an AI system to probe for weaknesses before attackers can exploit them.
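To make the idea concrete, here is a minimal, hypothetical sketch of such a test: a toy logistic-regression “malware score” and a gradient-based perturbation (in the spirit of the fast gradient sign method) that flips its verdict. The weights, the sample, and the epsilon budget are all invented for illustration.

```python
import numpy as np

# Toy "malware score" classifier: logistic regression with fixed weights.
# Weights and the sample below are invented for illustration only.
w = np.array([2.0, -1.0, 0.5])
b = -0.25

def predict_proba(x):
    """Probability the input is classified as malicious."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.9, 0.2, 0.4])          # sample the model flags as malicious
print(f"original score:  {predict_proba(x):.3f}")   # ~0.82

# FGSM-style perturbation: step each feature against the gradient of the
# score (the gradient of the logit w.r.t. x is just w), within budget epsilon.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print(f"perturbed score: {predict_proba(x_adv):.3f}")  # ~0.37, verdict flipped
```

Running the same kind of probe against your own models, before an attacker does, is the defensive use of adversarial ML that Perry describes.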

Behavioral analytics

Utilizes machine learning (ML) algorithms to identify unusual patterns that may indicate potential security threats (a minimal sketch of the approach follows the examples below).

Notable examples include:

  • User and entity behavior analytics (UEBA)
  • Network traffic analysis (NTA)
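As a hedged illustration of how such analytics can work, the sketch below uses scikit-learn’s IsolationForest to flag anomalous login sessions. The features (login hour, data transferred) and all of the data are synthetic; a real UEBA deployment would model far richer behavior.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline behavior: login hour (clustered around midday) and
# megabytes transferred per session.
normal = np.column_stack([
    rng.normal(13, 2, 500),     # login hour
    rng.normal(50, 15, 500),    # data transferred (MB)
])

# A few suspicious sessions: 3 a.m. logins with very large transfers.
suspicious = np.array([[3.0, 900.0], [2.5, 1200.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
for session in suspicious:
    label = model.predict(session.reshape(1, -1))[0]
    print(session, "-> anomalous" if label == -1 else "-> normal")
```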

AI-powered security operations

AI-based threat detection identifies various types of threats (e.g., malware, phishing attacks, and ransomware). AI-powered authentication, such as biometric authentication, verifies users, reducing risk for organizations.
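For a sense of how ML-based threat detection works under the hood, here is a minimal sketch of a phishing-URL classifier built on character n-grams with scikit-learn. The URLs and labels are invented; a production detector would train on a large labeled corpus and many more features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = phishing, 0 = legitimate.
urls = [
    "http://paypa1-secure-login.example.ru/verify",
    "http://account-update.bank0famerica.example.cn",
    "http://free-gift-card.example.tk/claim-now",
    "https://www.wikipedia.org/wiki/Phishing",
    "https://github.com/login",
    "https://docs.python.org/3/library/",
]
labels = [1, 1, 1, 0, 0, 0]

# Character n-grams pick up tricks like digit substitution ("paypa1").
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
clf.fit(urls, labels)

print(clf.predict(["http://secure-login-paypa1.example.tk/verify-now"]))
```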

Cybersecurity automation

Automation supports patch management and vulnerability scanning while quickly identifying and responding to threats.

With security orchestration, automation, and response (SOAR) platforms, cybersecurity and IT teams can address the overall network environment more efficiently.
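To illustrate the SOAR pattern, here is a hedged, self-contained sketch of an automated playbook: an alert is enriched with (hypothetical) threat intelligence and routed to a response based on its severity. Every function, field, and threshold here is invented; a real platform would wire these steps to actual security tools and ticketing systems.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    event: str
    severity: int  # 1 (low) .. 10 (critical)

# Hypothetical threat-intel set; a real playbook would query a feed or API.
KNOWN_BAD_IPS = {"203.0.113.7"}

def enrich(alert: Alert) -> Alert:
    """Raise severity if the source IP appears in threat intelligence."""
    if alert.source_ip in KNOWN_BAD_IPS:
        alert.severity = min(10, alert.severity + 4)
    return alert

def respond(alert: Alert) -> str:
    """Pick an automated response based on enriched severity."""
    if alert.severity >= 8:
        return f"block {alert.source_ip} at the firewall and open a ticket"
    if alert.severity >= 5:
        return "quarantine host and notify the on-call analyst"
    return "log for weekly review"

alert = enrich(Alert("203.0.113.7", "repeated failed logins", severity=5))
print(respond(alert))  # -> block 203.0.113.7 at the firewall and open a ticket
```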

Cybersecurity and AI

Widespread AI usage is still in its infancy, and many of its capabilities and limitations are not well known. This creates situations where many people, including business owners and their employees, are frequently using AI tools without understanding them.

Perry specifically cited generative AI when discussing the risks of increased AI usage without increased knowledge of how these tools work.

“People are using AI to generate documents, and copying and pasting the information, or sending data to tools that are really just a layer over AI to steal your information,” he said.

Perry also pointed to the risks in using AI to replace human writing and communication, such as asking an AI to generate corporate communications that should really be conceived and written by the person sending them.

“AI and specifically ChatGPT is really good at technical documentation,” Perry said. “If you ask it to help you generate queries, if you ask it to help you generate a technical report, it’s professional and crisp. But people who say AI produces text that is indistinguishable from human text are telling on themselves. AI is just not there yet.”

This isn’t because companies producing AI solutions don’t know how to train AI in authentic human communication styles; it’s because most major AI models have been primarily trained on abundant, readily available data. That means technical manuals and reference guides, not original fiction, nonfiction, poetry or essays.

As of now, humans are still the primary controllers of what AI knows, and we inevitably reproduce our own perspectives and biases when training AI models on particular datasets.

“We are building its corpus of knowledge,” Perry said. “And so the tools and methods we use for collecting that information will cause that AI to replicate our process.”

Read more: Joseph Perry for Security Magazine, “Unlocking the power of generative AI”

Developing acceptable use policies (AUPs) for AI

Cybercriminals are using AI and machine learning tools to attack and explore victims’ networks, spot vulnerabilities and potentially exploit them. The speed and sophistication of these maneuvers further exacerbate the potential risks of unregulated AI usage within an organization.

“People are using AI tools without doing a risk assessment on them because they’re blinded by the complexity of the technology, legal and regulatory challenges,” Perry said.

This is where acceptable use policies, or AUPs, become imperative.

What is an AUP, and why is it essential?

An acceptable use policy is a document that stipulates how, when and where a particular tool or resource can be used by an organization’s members. An AUP for accounting and finance, for instance, should address specific needs and risks associated with handling sensitive financial data.

An AI usage policy is essential because it provides employees with explicit guidelines for using and interacting with these risk-prone tools, while also educating them on the threats and consequences that arise from misuse. In work environments where sensitive information is abundant, AUPs are critical in maintaining security and compliance.

For artificial intelligence, Perry said, organizations should create both a general AI policy and a more specific set of guidelines for generative AI.

Securing cloud applications

Cloud applications offer significant benefits and efficiencies for multiple business functions, and adoption of these technologies is climbing quickly: a report from Mordor Intelligence, for instance, projected that AI usage in the accounting industry will grow 30% from 2023 to 2027.

Like artificial intelligence, cloud usage brings an increased risk of cyberattacks and vulnerabilities, making it imperative that organizations develop and implement best practices for securing cloud assets.

Four vital cloud security practices:

  1. Access controls: making sure only the right people can access data (see the sketch below)
  2. Data backups: regularly backing up data for emergencies
  3. Employee training: training all team members on security best practices
  4. Compliance: ensuring data privacy laws and regulations are followed
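As a toy illustration of the first practice, the sketch below implements a deny-by-default role-based access-control (RBAC) check. The roles, permissions, and users are invented for illustration; cloud providers offer managed equivalents (IAM policies and the like).

```python
# Minimal role-based access control (RBAC): map roles to permissions,
# users to roles, and check every access against that mapping.
ROLE_PERMISSIONS = {
    "analyst":  {"read:reports"},
    "engineer": {"read:reports", "write:configs"},
    "admin":    {"read:reports", "write:configs", "manage:users"},
}

USER_ROLES = {"alice": "admin", "bob": "analyst"}

def is_allowed(user: str, permission: str) -> bool:
    """Deny by default; allow only if the user's role grants the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("alice", "manage:users")
assert not is_allowed("bob", "write:configs")
```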

Wrapping Up

Artificial intelligence and cloud applications are powerful business tools that can revolutionize the way an organization and its team members perform. But there are inherent risks with any new or complex technology, and there are always bad actors waiting to exploit an organization’s neglect of risk or misunderstanding of how its tools function.

Human behavior still plays a crucial role in advancing cybersecurity within any business. Awareness, training, and vigilance are key to preventing security breaches, ensuring data protection and mitigating potential threats as companies adopt and integrate these powerful tools.

Want to learn more about AI and cloud security? Watch the full presentation on-demand.

Learn more about MorganFranklin’s cybersecurity services.
