
Protecting the IP of AI Systems

Researchers in the Department of Electrical and Computer Engineering have developed a defense mechanism against attacks on AI systems that attempt to steal the proprietary parameters — valuable intellectual property — that define how the system works. 


Put simply, cryptography is about keeping communications secure. 

No matter how you define it, though, cryptography as a field predates the now widely known concept of cryptocurrencies like Bitcoin and Ethereum by at least several decades.

Before the advent of computers, cryptography was essentially one and the same as encryption — turning intelligible information into nonsense. But by the end of World War II, cryptographic methods had become increasingly complex and were finding more varied applications. 

Modern cryptography relies heavily on mathematical theory and computer science.

Until recently, however, the field had yet to figure out a functional defense mechanism to protect against attacks aimed at stealing an AI system’s secret sauce, so to speak.

Known as “cryptanalytic parameter extraction” attacks, or just cryptanalytic attacks for short, these exploits let bad actors steal the model parameters that define how an AI system works in an attempt to recreate it.

“Cryptanalytic attacks are already happening, and they’re becoming more frequent and more efficient,” says Aydin Aysu, an associate professor of electrical and computer engineering at NC State University. “We need to implement defense mechanisms now, because implementing them after an AI model’s parameters have been extracted is too late.”

So Aysu and his Ph.D. student Ashley Kurian set out to find the solution — and did. 

This past December, Aysu and Kurian presented a paper on the work at the Annual Conference on Neural Information Processing Systems.

“Until now, there has been no way to defend against those attacks,” says Kurian, who was first author of the paper. “Our technique effectively protects against these attacks.”

Defending Against Math

The first step in a cryptanalytic attack is to submit inputs to the targeted AI model and observe the outputs it returns.

“They then use a mathematical function to determine what the parameters are,” Aysu explains. 
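
To make that two-step process concrete, here is a minimal Python sketch of the idea, not the specific attacks studied in the paper: a toy “victim” model is a single linear layer with secret weights, the attacker submits chosen inputs through black-box queries, records the outputs, and then solves for the hidden parameters mathematically. The model, its size, and the query strategy are all illustrative assumptions.

```python
import numpy as np

# Hypothetical "victim" model: a single linear layer y = W x with secret weights.
# Real attacks target full neural networks; this toy only shows the principle.
rng = np.random.default_rng(0)
secret_W = rng.normal(size=(3, 4))            # the parameters the attacker wants

def query_model(x):
    """Black-box access: the attacker sees outputs, never secret_W itself."""
    return secret_W @ x

# Step 1: submit chosen inputs and record the outputs.
queries = np.eye(4)                           # one query per standard basis vector
responses = np.stack([query_model(x) for x in queries], axis=1)

# Step 2: use math to recover the parameters from the input/output pairs.
# With basis-vector queries, each response is exactly one column of secret_W.
recovered_W = responses

print(np.allclose(recovered_W, secret_W))     # True: parameters extracted
```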

So far, Aysu says, these attacks have only worked against a type of AI model known as a neural network. While that might sound reassuring, the problem is that most commercial AI systems — including large language models like ChatGPT — are neural networks.

An abstract 3D rendering of data flowing through a neural network.

Neural networks are made up of neurons arranged in layers, which sequentially work to assess and respond to inputs. Data gets processed by the first layer, then the outputs of that layer are passed on to the second layer, and so on. Once the data has been processed by the entire neural network, the AI system determines how to respond.
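
As a rough illustration of that layer-by-layer flow, the following Python sketch builds a small fully connected network with made-up weights and passes an input through each layer in sequence; the sizes, weights, and activation function are arbitrary assumptions chosen only to show the structure.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Made-up weights for a small three-layer fully connected network.
rng = np.random.default_rng(1)
layers = [rng.normal(size=(8, 4)),   # layer 1: 4 inputs -> 8 neurons
          rng.normal(size=(8, 8)),   # layer 2: 8 neurons -> 8 neurons
          rng.normal(size=(2, 8))]   # layer 3: 8 neurons -> 2 outputs

def forward(x):
    """Each layer processes the previous layer's output and passes it on."""
    for W in layers[:-1]:
        x = relu(W @ x)              # hidden layer: weighted sum, then activation
    return layers[-1] @ x            # final layer produces the network's response

print(forward(np.array([0.5, -1.0, 2.0, 0.3])))
```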

The defense mechanism Aysu and Kurian developed ultimately came down to one key insight about a core principle that every attack they analyzed relied on.

“What we observed is that cryptanalytic attacks focus on differences between neurons,” says Kurian. “And the more different the neurons are, the more effective the attack is. Our defense mechanism relies on training a neural network model in a way that makes neurons in the same layer of the model similar to each other. You can do this only in the first layer, or on multiple layers. And you could do it with all of the neurons in a layer, or only on a subset of neurons.”
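
The paper's exact training procedure is not spelled out here, but the idea of making neurons in the same layer similar can be sketched as a regularization term added to the usual training loss. The following Python sketch shows one hypothetical way to encode it: a penalty on how far each neuron's weight vector (a row of the layer's weight matrix) strays from the layer's average neuron, so that minimizing the combined loss pulls the neurons toward one another. The function name, the penalty form, and the weighting factor are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def neuron_similarity_penalty(W):
    """Hypothetical regularizer: penalize differences between neurons.

    Each row of W is one neuron's weight vector. The penalty is the total
    squared distance of every neuron from the layer's mean neuron, so
    minimizing it during training makes the neurons more alike.
    """
    mean_neuron = W.mean(axis=0, keepdims=True)
    return np.sum((W - mean_neuron) ** 2)

# During training this would be added to the ordinary task loss, e.g.:
#   total_loss = task_loss + lambda_sim * neuron_similarity_penalty(first_layer_W)
# where lambda_sim is an assumed tuning knob, and the penalty could be applied
# to the first layer only, to several layers, or to a subset of neurons.
rng = np.random.default_rng(2)
first_layer_W = rng.normal(size=(8, 4))       # made-up first-layer weights
print(neuron_similarity_penalty(first_layer_W))
```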

Keeping Pace With Technology

Aysu says that the new defense mechanism makes it so that a cryptanalytic attack “doesn’t have a path forward” — while allowing the model to function normally “in terms of its ability to perform its assigned tasks.”

AI models that incorporated the defense mechanism had an accuracy change of less than 1% in proof-of-concept testing.

“We know this mechanism works, and we’re optimistic that people will use it to protect AI systems from these attacks,” Kurian says. “And we are open to working with industry partners who are interested in implementing the mechanism.”

Although Aysu and Kurian have found a way to stop cryptanalytic attacks for now, they’re well aware that it’s only a matter of time until someone finds a new workaround.

“We also know that people trying to circumvent security measures will eventually find a way around them — hacking and security are engaged in a constant back and forth,” says Aysu. “We’re hopeful that there will be sources of funding moving forward that allow those of us working on new security efforts to keep pace.”

This article is based on a news release from NC State University.