As we build new applications, we need systems that provide guardrails and ensure the correct functioning of ever more powerful tools.
Professor Adam Chlipala is working on groundbreaking ways of verifying system functionality, tied to cryptography and mathematics.
Introducing the concept of provable software systems, Chlipala starts with a broader appeal to verification and the need to really “vet” software before use.
“As we think about deploying AI systems in the physical world,” he said, “we really want to make sure that they do what they are intended to do.”
Expounding on some promising solutions, he described how you can apply heavier-weight principles to ensure software correctness.
“Most people, when they hear ‘AI’, they think about machine learning or statistical methods,” he said, “but there’s an older tradition grounded in mathematical logic where we can have strong mathematical guarantees (about software functionality).”
Interesting…
Articulating the end goal, he spent some time talking about making processes resilient against bugs, and systems resilient against vulnerabilities.
“Can we come up with principled methods of developing computer systems where we’re much less likely to be surprised by bugs?” he asked. “That will have big economic consequences. … when you succeed in that, what you get is this lovely result where all of the pieces don’t need to be trusted anymore – you’re guaranteed that if you manage to carry out that proof, you’ve caught any consequential bugs.”
In that case, he added, you can have more confidence in the processor, in the machine code format and the compiler.
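To make the idea concrete, here is a minimal sketch of what a machine-checked guarantee looks like in the Lean proof assistant (my own illustration, not from the talk): we write a function and then prove, once and for all inputs, that it meets its specification.

```lean
-- Hypothetical illustration: a function plus a machine-checked proof
-- that it meets its specification for every input.
def double (n : Nat) : Nat := n + n

-- Specification: `double n` always equals `2 * n`.
theorem double_correct (n : Nat) : double n = 2 * n := by
  unfold double
  omega  -- decision procedure for linear arithmetic closes the goal
```

Once the checker accepts `double_correct`, nobody has to trust the author's testing: the theorem covers every input at once, which is exactly the sense in which the pieces "don't need to be trusted anymore."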
Here’s a really fascinating part. Chlipala shared a prototype “IoT light bulb” that had some simple components hooked up to a logic board. Its only function, he explained, is to turn the light bulb on and off. Because the behavior is that simple, the system is far easier to specify and prove correct.
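The appeal of such a tiny device is that its entire behavior fits in a small formal model. As a hypothetical sketch in Lean (not Chlipala's actual code), an on/off light can be modeled and reasoned about in a few lines:

```lean
-- Hypothetical model of a light with exactly two states.
inductive Light
  | on
  | off

def toggle : Light → Light
  | .on  => .off
  | .off => .on

-- One checkable statement of correct behavior:
-- toggling twice always returns the light to its original state.
theorem toggle_toggle (l : Light) : toggle (toggle l) = l := by
  cases l <;> rfl
```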
He also showed a case study with a Lego garage, with a cryptographic library and a cryptographic protocol.
These are neat systems, but there is still groundwork to be done in figuring out how to specify and verify the right result.
“When you want to prove that the system is correct, you want to formalize what ‘correct’ means,” he said.
Chlipala talked about code injection vulnerabilities, for instance. Simply put, if you have defined the operations precisely, your software is less open to injection attacks, partly because those attacks will be poorly targeted.
“If you managed to prove almost any … hypothesis,” he said, “it could be a little off, but (you’re) definitely going to be defended against the most common and important types of security problems.”
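One way to read this: when queries are built from well-defined operations rather than spliced-together strings, the command structure is fixed by the types themselves. A hypothetical toy model in Lean (my illustration, not from the talk):

```lean
-- Toy model: a query is a fixed command shape that carries user
-- input as data; there is no constructor for raw command text.
inductive Query
  | select (userInput : String)

-- Rendering quotes the input, so it is always treated as a value.
def render : Query → String
  | Query.select s => "SELECT * FROM users WHERE name = " ++ s.quote

-- By construction, every query an attacker can produce is a SELECT
-- with some string payload; no input can change the command shape.
theorem always_select (q : Query) : ∃ s, q = Query.select s := by
  cases q with
  | select s => exact ⟨s, rfl⟩
```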
“I am hopeful that these kinds of methodologies and tools will become increasingly used to build systems alongside their formal mathematical proofs of correct behavior,” he said, “and everyone who relies on this will be able to check these proofs, and not have to trust anything.”
I thought this was pretty relevant to how we create and deploy software.
Read the full article here