
3 posts tagged with "zk"


· 10 min read
Bolton Bailey

Hello everyone! I'm Bolton Bailey, a Ph.D. student and software engineer at RISC Zero. I'm writing this blog post to discuss a project that I've been working on, in conjunction with Runtime Verification Inc., to run a Metamath checker inside the RISC Zero zkVM. Although this project is just getting started, I think it's headed in a very exciting direction. In this post, I'd like to share with you all what makes it so interesting, how we've gone about it, and what it could look like going forward.

What are Formal Methods?

My field of research lies at the intersection of cryptography and formal methods. Formal methods are a mathematically rigorous approach to software development that ensures a piece of code does what it is supposed to do. A formal methods approach goes farther than simply examining a piece of code line-by-line or writing a test suite to ensure the code runs correctly on a number of examples. Instead, we write out a mathematically precise specification of what the code is supposed to do and prove that the code will always behave as specified, no matter what input it is given.

Rather than writing down specifications and proofs on paper and checking them by hand, we get the computer itself to help us. Formalists use specialized computer languages known as formal systems to express their proofs, and they write programs to check these proofs so that they can be sure they made no mistakes. The language I've been working with is the Metamath system.

What is Metamath?

Metamath is a formal system that's designed to be flexible enough to represent proofs in a variety of mathematical frameworks, but also very simple for the computer to understand. The way a computer processes logic can be unintuitive for a human, but this section will hopefully give you an idea of how it works.

Here is an example of some simple Metamath code from the manual. This code describes a simple mathematical system for talking about numbers, with addition and zero as the basic concepts:

$( Declare the constant symbols we will use $)
$c 0 + = -> ( ) term wff |- $.
$( Declare the metavariables we will use $)
$v t r s P Q $.
$( Specify properties of the metavariables $)
tt $f term t $.
tr $f term r $.
ts $f term s $.
wp $f wff P $.
wq $f wff Q $.
$( Define "term" and "wff" $)
tze $a term 0 $.
tpl $a term ( t + r ) $.
weq $a wff t = r $.
wim $a wff ( P -> Q ) $.
$( State the axioms $)
a1 $a |- ( t = r -> ( t = s -> r = s ) ) $.
a2 $a |- ( t + 0 ) = t $.
$( Define the modus ponens inference rule $)
${
    min $e |- P $.
    maj $e |- ( P -> Q ) $.
    mp $a |- Q $.
$}
$( Prove a theorem $)
th1 $p |- t = t $=
$( Here is its proof: $)
tt tze tpl tt weq tt tt weq tt a2 tt tze tpl
tt weq tt tze tpl tt weq tt tt weq wim tt a2
tt tze tpl tt tt a1 mp mp
$.

There is a lot of boilerplate, but the critical parts of the definition are the two axioms on the lines that start with a1 and a2. Respectively, these axioms state that:

  1. If t = r and t = s, then r = s.
  2. For any term t, t + 0 = t.

As an example of how detailed these proofs can get, let's unpack the theorem at the end of the example above, which proves t = t for any term t. The proof consists of a very low-level list of invocations of rules and axioms that show the theorem holds. At a high level, the proof substitutes t + 0 for t in the first axiom and substitutes t for both r and s, so that the first axiom becomes "If t + 0 = t and t + 0 = t, then t = t". By the second axiom, both t + 0 = t conditions hold; therefore, the first axiom tells us that t = t. Notice that we have to use the first axiom once and the second axiom twice (once for each condition of the first axiom). While not all of the symbols in the proof of th1 may be familiar, we can see it includes one reference to a1 and two references to a2.

This may seem like an overly convoluted way of proving a very simple fact. That's because computers need to have every detail spelled out for them. Metamath in particular is designed to have very few built-in shortcuts, in order to keep the Metamath checking program as simple as possible. Unlike a human, the Metamath checker needs to be reminded of the fact that "t is a term" every time t comes up again in the argument. This is essentially what the tt token does - you can see it comes up 15 times in the proof of th1!

What are the possibilities for formal methods?

The example theorem above is a simple one. In principle, however, there's no limit on what can be proven within a formal system like Metamath: we can describe logical systems that deal with set theory or even outline the semantics of programming languages. Within formal methods, programs and data are just different types of mathematical objects, ones for which confidence can be of the utmost importance. The software industry in general, and the cryptocurrency industry in particular, have a lot at stake when it comes to bugs in their systems: billions of dollars have been hacked out of blockchains, and entire organizations exist that aim to use formal analysis to help plug these holes.

Zero-Knowledge Proofs for Formal Methods

Having established what formal methods are and why they're useful, let's ask why someone might want to use succinct zero-knowledge proofs in conjunction with them.

Suppose a technology firm wants to be assured that their products have no bugs. They open a bug bounty and offer a prize to anyone who can find a flaw. A formal analysis firm writes a specification of what the code is supposed to do and then attempts to find a proof that the code meets the specification. In their analysis, they end up finding a bug, and they create a formal proof that the specification is not satisfied.

At this point, the formalist firm is nervous that the software writers will look at the proof and say that actually, the bug isn't a problem at all, and that it doesn't merit paying out the bug bounty. Besides, the technology firm doesn't want to run this long formal check themselves. It might not even be a technology firm offering the bounty at all, but instead a smart contract. The smart contract is limited in how much gas it can use, so it can't reasonably check the whole formal analysis.

This scenario is ripe for a succinct proof application. The formalists can prove the bug is real without a big data transfer or computation on the verifier's side: they send a STARK proof attesting that a proof checker for Metamath (or another formal system) accepted their formal proof. As a bonus, they can make the STARK a zk-STARK if they are worried about someone seeing the proof, reverse-engineering the bug, and making an exploit.

Making it a Reality with RISC Zero

How does all this work? In a succinct argument system like a zk-STARK, we have a prover and a verifier. The prover is meant to show that some computation gave some particular result, and the verifier checks that the proof is correct. These two roles are analogous to the formal proof writer and the formal proof checker. In this application, the analogy becomes literal: the computation whose result we prove is itself a run of the formal proof checker.

The prover first makes a Metamath file to prove their theorem. They then run a Metamath checker for the file as a computation inside the ZK proof system. This gives them a cryptographic output proof that they then pass to the verifier. In order to make this work, we need to be able to encode the Metamath proof checking process inside a zkVM.

This is where RISC Zero enters the picture. If we were doing this project in a zkEVM or using a circuit model, it would be necessary to write a Metamath checker in Solidity, Circom, or some other specialized language. But because RISC Zero has a zkVM for the RISC-V architecture, I can write my Metamath checker in a language like Rust. In fact, I don't even need to know Rust, because a Metamath checker written in Rust already exists, and I can just drop it into the RISC Zero VM and have it work! Even if formal methods aren't of interest to you at all, the implications for making STARK cryptography accessible are profound and exciting.

The Repository

You can find the GitHub repository for our project here. I exaggerate a little when I say you don't even have to know Rust to make a Rust program run in the RISC Zero zkVM, but not by much. The repository is a fork of the much simpler risc0/rust-starter, which has the same basic structure. There are two Rust programs in each: a host that sends data to the zkVM through the add_input() method, and a guest that runs inside the zkVM, receives input via the env::read() method, and commits to outputs using the env::commit() method.
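
To give a feel for that structure, here is a minimal sketch of the guest side, loosely modeled on the rust-starter layout. The crate path, the omitted entry-point boilerplate, and the check_theorem helper are assumptions made for illustration; only the env::read() / env::commit() flow is what's described above, and the real guest drops in an existing Rust Metamath checker rather than a stub.

// guest/src/main.rs -- the program that runs inside the zkVM (schematic).
// The no_std/no_main entry boilerplate is omitted, and the exact crate name
// varies between RISC Zero releases.
use risc0_zkvm_guest::env;

pub fn main() {
    // Private inputs, sent by the host via add_input(): the full Metamath
    // file and the label of the theorem we want checked.
    let metamath_source: String = env::read();
    let theorem_label: String = env::read();

    // Hypothetical stand-in for the embedded Metamath checker. It should
    // panic (aborting the proof) if the labeled theorem does not check, and
    // return the theorem's statement plus the axioms it relies on.
    let (statement, axioms) = check_theorem(&metamath_source, &theorem_label);

    // Public outputs: committing them puts them in the receipt, so the
    // verifier learns which theorem was proven and from which axioms.
    env::commit(&statement);
    env::commit(&axioms);
}

// Stub for illustration only; the real project embeds a full Rust Metamath
// verifier here.
fn check_theorem(_source: &str, _label: &str) -> (String, Vec<String>) {
    unimplemented!()
}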

When you're designing a zk-STARK, it's good to think about what data is part of the statement and what is part of the witness. The statement is the data that is known to the verifier. In the case of our Metamath verifier, it includes the statement of the theorem that is being proven. It also includes the list of axioms on which that theorem is based (otherwise, the prover might create their own axiom that simply says the theorem is true). The witness is all the data that is known to the prover. In our case, this is simply an entire Metamath file, along with an indication of which theorem in that file is the one we are requesting the proof for.
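
One way to picture that split is as a pair of Rust types, one public and one private. These are hypothetical types for illustration, not the repository's actual definitions:

// Hypothetical types for illustration only; not the repository's actual code.

/// The statement: the data the verifier sees.
struct Statement {
    /// The assertion being claimed, e.g. "|- t = t".
    theorem: String,
    /// The axioms the theorem is allowed to rely on. Without this list, a
    /// prover could smuggle in an axiom that simply asserts the theorem.
    axioms: Vec<String>,
}

/// The witness: the data only the prover holds.
struct Witness {
    /// The entire Metamath file, including the proof of the theorem.
    metamath_file: String,
    /// The label of the theorem in that file we are proving, e.g. "th1".
    theorem_label: String,
}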

Our project has been able to run a variety of small Metamath programs, such as the one in the example above. Our primary bottleneck is the execution time within the VM; because producing a STARK proof requires memory proportional to the size of the execution trace, we can only check Metamath files of a few hundred lines before we hit the limit. The RISC Zero team has been working hard though, and I suspect we will soon have engineering solutions to remove this obstruction. From there, the only limit on the space of STARKed formal methods will be our own imaginations!

· 4 min read
Ash Schap

I am excited to announce that I have taken a new role as the Chief Strategy Officer of RISC Zero.

RISC Zero is focused on bringing the magic of Zero Knowledge Proofs to blockchains to unlock the potential of Web3 through scaling. We have built a ZKVM around the RISC-V instruction set architecture. This means the RISC Zero ZKVM can support any language that can be compiled to RISC-V, including Rust, C++, and (soon) Go. To our knowledge, this means it is the first and only ZKVM in the Web3 space that does not require developers to adopt a custom or unfamiliar language.

For those of you that know me, this may come as a surprise. I have been an Ethereum maximalist for a long time. I started my crypto journey doing business development for MakerDao, and my most recent role was Head of Growth for Uniswap, the largest and most widely used Dapp on Ethereum.

I love Ethereum. I am so proud of what we, as a community, have built. Today it is possible for users around the world to access powerful financial infrastructure in a decentralized, permissionless way. The NFT space empowers creators every day to experiment with new forms of art and new ways of engaging with their fans and collectors. There is so much that is happening on Ethereum that is groundbreaking, cutting edge and truly inspiring to watch.

But Ethereum has not yet scaled. And that has been painful at times. I started my career in crypto so optimistic that we could bring the power of smart contracts to everyone on earth, lowering the barrier to entry to Web3 and providing cheap, reliable, safe financial tools to everyone in the world who may have a need for them. But today, only the wealthy can afford to interact with Ethereum Layer 1. I didn't help build MakerDao and Uniswap so that only whales could use them. So as I took time off after leaving Uniswap last September, I started to look around and really dig into the technical forefront of the scaling question.

Layer 2 solutions like Optimism are paving the way here, and I am very excited to see the application layer maturing as those solutions gain adoption. But as I learned more about how Layer 2 works, I realized that we will never be able to build the kinds of sophisticated web applications we see in today's Web2 without a substantial increase in throughput. Existing Layer 2 solutions will allow Web3 to grow faster and allow for the creation of more advanced applications that are far more affordable for users. But there is a cap on that throughput. As the complexity of dapps increases, that throughput will begin to max out. The only way to realistically scale blockchains is to abandon linear state and begin to experiment with parallel execution.

Zero-Knowledge Proofs can take us one step closer to that reality. My team at RISC Zero is working on a lot of clever math that makes me extremely optimistic that we can reliably scale in a parallel way without sacrificing decentralization or security, while also adding optionality for private transactions.

Our first product will be a Devnet built using Tendermint consensus, the goal being to get something into the hands of developers quickly so that we can better understand what challenges they face in building with ZKPs. Longer term, we have a lot of options on how we will deploy our products. My personal hope is that we can create an interoperable blockchain that demonstrates the power of this technology while also deploying as a Layer 2 on top of several other L1s including Ethereum and maybe even Bitcoin. Our plan is to open source everything, so my hope is that this tech will be a resource for any blockchain that needs it.

I have often said that I think I will probably spend the rest of my career trying to make the dream of global borderless money a reality. Right now this feels like the next step in that journey. I am excited to be going deeper in the stack! The magic of ZKPs has totally captured my imagination, and I searched the industry really thoroughly for the team with the right set of technical skills and the right set of ideological objectives to take this tech to the next level. RISC Zero is that team, and I could not be more thrilled to be starting this new chapter.

If you are interested in joining us, check out our careers page here:

If you want to learn more about how Zero Knowledge works, here are some resources:

And for more on RISC Zero, check these out:

· 2 min read

We're excited to share a new kind of computing platform with the world – one that can't lie about anything it does. RISC Zero enables any developer to build zero-knowledge proofs that can be executed and verified on any modern computer in the languages they are most familiar with.

RISC Zero is the first fully free-and-open-source general-purpose ZK computing platform. Our prover and verifier are available under the Apache 2.0 license, along with several simple but powerful examples of this technology written in Rust and C++.

General Purpose Verifiable Computing

RISC Zero is a fully-compliant software implementation of the RISC-V instruction set architecture (RV32IM). The ISA is implemented as a set of polynomial constraints inside a zk-STARK-based proving system.

So RISC Zero is what, exactly?

RISC Zero is a zero-knowledge virtual machine that can run on any platform. It's a virtual microcontroller / co-processor that produces receipts for every program it runs, kind of like a secure virtual Arduino. These receipts can be verified at any time by anyone in milliseconds – even if you don't trust the party that executed the code.
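
To make "verified at any time by anyone" concrete, here is a minimal sketch of the verifier's side. The types and the verify method below are placeholders standing in for the real RISC Zero receipt API, which differs between releases, so treat every name here as an assumption:

// Placeholder types standing in for RISC Zero's real receipt API.
struct Receipt;   // the proof-carrying record emitted by a program run
struct MethodId;  // identifies the program that was supposed to run

impl Receipt {
    // In the real API this checks the underlying zk-STARK; here it is a stub.
    fn verify(&self, _method_id: &MethodId) -> Result<(), ()> {
        Ok(())
    }
}

// Checking a receipt needs only the receipt itself and the expected method
// ID: no inputs, no secrets, and no re-execution of the original program.
fn receipt_is_valid(receipt: &Receipt, method_id: &MethodId) -> bool {
    receipt.verify(method_id).is_ok()
}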

How is it different from other ZK projects?

Our initial release of RISC Zero is different in a few critical ways:

  1. Both the prover and verifier are fully open. This means programs can be proven locally, preserving privacy and information hiding.
  2. The verifier has implementations in C++ and Rust. As such, it can easily run on many different platforms and on many different blockchains.
  3. Because our ZKVM supports a standard ISA and looks like a normal von Neumann machine, it is both familiar to a broad set of developers and can be programmed using standard languages.

What's Next?

We are working on several exciting projects internally that will be announced, explained, and released over the coming months. Please join our mailing list or Discord, or follow us on Twitter.