Cover generated by FreePik
Making sure that images shared online are authentic and trustworthy is a big challenge. But let's be real: most images need some tweaking before they go public. Zero-knowledge proofs (ZKPs) can help by verifying edited images without revealing the original. The problem? ZKPs are often costly, especially in prover complexity and proof size. That's where VIMz comes in. VIMz is a framework for efficiently proving the authenticity of high-resolution images using folding-based zkSNARKs (powered by the Nova proving system). With VIMz, we can verify that both the original and edited images are legit, along with the correctness of the transformations, all without revealing any intermediate versions; only the final image is exposed. Plus, VIMz keeps the identities of the original creator and subsequent editors private while proving the final image's authenticity, making it ideal for privacy-preserving, trustless marketplaces compatible with C2PA standards. It's efficient enough to handle 8K images on a mid-range laptop with modest memory and a small proof size, offering fast verification and parallel processing capabilities. We formally prove the security of VIMz in our recent paper.
To address the prover complexity in "proofs of media provenance," we utilize the efficiency of folding schemes, specifically the Nova protocol. More precisely, we leverage Circom to define our folding steps in Nova via the Nova-Scotia frontend. To ensure everything works securely, we developed a new commitment scheme for images that processes them row by row. This approach allows us to map any image transformation to a "folding-friendly" version that can be proven in Nova while simultaneously proving the commitment to the witness, i.e., the original image. For details on the exact protocol and the formal security analysis of VIMz, refer to our recent paper.
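To build intuition for how a row-by-row commitment composes with folding, here is a minimal Python sketch of ours. It is not the VIMz construction: VIMz uses a SNARK-friendly hash inside its Circom step circuits, whereas this sketch substitutes SHA-256 for readability. Each loop iteration absorbs one row, mirroring one folding step:

```python
import hashlib

def commit_image_rows(rows: list[bytes]) -> bytes:
    """Hash-chain commitment over an image, one row per step.

    Each iteration plays the role of one folding step: it takes the
    running accumulator (the state carried between steps) and absorbs
    the next row of pixel data into it.
    """
    acc = b"\x00" * 32  # initial state of the chain (a public constant)
    for row in rows:
        acc = hashlib.sha256(acc + row).digest()  # absorb one row
    return acc

# Example: a dummy 720-row "HD" image with 1280 RGB pixels per row.
image = [bytes([255, 0, 0] * 1280) for _ in range(720)]
print(commit_image_rows(image).hex())  # final commitment to the image
```

Because the accumulator is the only state passed from one step to the next, the prover never needs to hold the whole image in a single circuit; this is what keeps the per-step circuit (and hence memory) small.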
Our tests show that VIMz is fast and efficient on both the prover and verifier sides. For example, you can prove transformations on 8K (33MP) images using just a mid-range laptop, hitting a peak memory usage of 10 GB. Verification takes less than 1 second, and proof sizes come in at under 11 KB no matter the resolution. Plus, the low complexity of VIMz means you can prove multiple transformations in parallel, boosting performance by up to 3.5x on the same machine.
If you have used this repo to develop a research work or product, please cite our paper:
- PETS conference paper: [2025] S. Dziembowski, S. Ebrahimi, P. Hassanizadeh, "VIMz: Private Proofs of Image Manipulation using Folding-based zkSNARKs"

```bibtex
@inproceedings{vimz25pets,
  title     = {VIMz: Private Proofs of Image Manipulation using Folding-based zkSNARKs},
  author    = {Dziembowski, Stefan and Ebrahimi, Shahriar and Hassanizadeh, Parisa},
  booktitle = {Proceedings on Privacy Enhancing Technologies (PoPETs)},
  address   = {Mountain View, CA},
  month     = {July},
  year      = {2025}
}
```
The following table provides performance measurements of VIMz executed separately on two different devices (a Core-i5 laptop and a Ryzen 9 server) while proving transformations on an HD-resolution image. We executed multiple runs with different input values and report the average performance below; in all of our runs we observed very high consistency between results. A more detailed analysis is available in the paper:
| Transformation | Circom circuit (Nova step) | Mid-range laptop (key gen.) | Mid-range laptop (proving) | Server (key gen.) | Server (proving) | Peak memory |
|---|---|---|---|---|---|---|
| Crop | `optimized_crop_step_HD` | 3.8 s | 187.1 s | 3.5 s | 133.0 s | 0.7 GB |
| Resize | `resize_step_HD` | 11.5 s | 187.0 s | 6.6 s | 135.7 s | 2.5 GB |
| Contrast | `contrast_step_HD` | 11.7 s | 479.4 s | 6.5 s | 371.7 s | 2.4 GB |
| Grayscale | `grayscale_step_HD` | 8.2 s | 279.6 s | 3.7 s | 240.6 s | 1.3 GB |
| Brightness | `brightness_step_HD` | 11.3 s | 474.0 s | 6.5 s | 372.5 s | 2.4 GB |
| Sharpness | `sharpness_step_HD` | 11.8 s | 614.1 s | 6.8 s | 455.8 s | 2.8 GB |
| Blur | `blur_step_HD` | 11.5 s | 555.3 s | 6.6 s | 406.0 s | 2.5 GB |
> [!NOTE]
> We have two implementations of crop. By default, our benchmarks run the `optimized_crop` version, which has a fixed starting point for the crop. The other version of our crop circuit supports an arbitrary starting point; we refer to it in the paper as *selective crop*, and it is far more complex than the `optimized_crop` circuit. You can find the circuits for each version below:
>
> - Static/fixed crop: `circuits/optimized_crop_HD.circom` and `circuits/optimized_crop_4K.circom`
> - Selective crop: `circuits/crop_HD.circom` and `circuits/crop_4K.circom`
The repository is organized into five directories:

- `circuits`: Contains the underlying ZK circuits of VIMz in the `circom` language.
- `contracts`: Contains high-level Solidity smart contracts (see Appendix F, C2PA-Compatible Marketplace, in the paper) that provide the infrastructure for a C2PA-compatible marketplace on EVM-based blockchains.
- `nova`: Contains the main `cargo`-based package for building and installing VIMz using the `nova` protocol.
- `py_modules`: Houses the Python interface (GUI) of VIMz, facilitating image editing and preparation of input files for the VIMz prover.
- `samples`: Holds images in standard resolutions (e.g., HD, 4K) along with pre-built `JSON` files of supported edits to be fed into the VIMz prover.
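Schematically (layout as described above):

```
vimz/
├── circuits/      # ZK circuits (Circom)
├── contracts/     # Solidity smart contracts
├── nova/          # main cargo-based package
├── py_modules/    # Python GUI and input preparation
└── samples/       # sample images and JSON edit files
```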
- I-a) Node.js:

  ```bash
  curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
  source ~/.bashrc
  nvm install v16.20.0
  ```
> [!TIP]
> In rare cases (misconfigured Linux distros), if you get an error stating that version "v16.20.0" was not found, the following command might help:
>
> ```bash
> export NVM_NODEJS_ORG_MIRROR=http://nodejs.org/dist
> ```
- I-b) snarkjs:

  ```bash
  npm install -g snarkjs
  ```
- I-c) Rust:

  ```bash
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain none -y
  rustup default stable
  ```
- I-d) build-essentials:

  ```bash
  sudo apt install gcc build-essential nlohmann-json3-dev libgmp3-dev nasm
  ```
- I-e) Circom:

  ```bash
  git clone https://github.com/iden3/circom.git
  cd circom
  cargo build --release
  cargo install --path circom
  ```

  - Verify the installation:

    ```bash
    circom --version
    ```
- I-f) Time:

  ```bash
  sudo apt install time
  ```
> [!NOTE]
> We have successfully executed the benchmarks on multiple systems and expect minimal sensitivity to the specific versions of the dependencies. For the record, here is one of our recent system configurations:
>
> - Ubuntu @ 22.04
> - Circom @ 2.2.1
> - snarkjs @ 0.7.5
> - rustc @ 1.86.0-nightly
Once you have installed the dependencies, you can proceed with setting up and running VIMz. To obtain the latest version of VIMz, head to a directory of your choice and install it using the following commands:
- Clone:

  ```bash
  git clone https://github.com/zero-savvy/vimz.git
  ```

- Head to the `nova` directory:

  ```bash
  cd vimz/nova
  ```

- Build and install `vimz` using `cargo`:

  ```bash
  cargo build
  cargo install --path .
  ```

- Verify the installation of `vimz`:

  ```bash
  vimz --help
  ```

- Go to the circuits directory:

  ```bash
  cd ../circuits
  ```

- Build the node modules:

  ```bash
  npm install
  ```

- Build the ZK circuits using the provided script in this directory:
  - Circuit-specific build:

    ```bash
    ./build_circuits.sh grayscale_step_HD.circom
    ```

    or

    ```bash
    ./build_circuits.sh contrast_step_4K.circom
    ```

  - Full build:

    ```bash
    ./build_circuits.sh
    ```
> [!NOTE]
> If you only want to reproduce the results, we suggest building just a few circuits, because building all of them can take some time! It's not that long, but why wait? :D
We've built the tools necessary for benchmarking using the samples provided in the `samples` directory. To benchmark, simply go to the main directory of the vimz repo and run any number of transformations you prefer using the provided script:

```bash
./benchmark.sh <resolution> [list-of-transformations]
```
> [!IMPORTANT]
> Make sure that the circuits related to the transformations being benchmarked are already built (see section II-b, Building Circuits).
> [!TIP]
> Since proof generation can be time-consuming, we recommend initially benchmarking only one transformation at a time (replicating the HD-resolution results presented in Table 4 of the paper). Once these results are verified, you can proceed to run multiple transformations in parallel to replicate the results shown in Table 5.
Example 1: benchmarking a single transformation:

```bash
./benchmark.sh HD contrast
```

or

```bash
./benchmark.sh 4K blur
```

or

```bash
./benchmark.sh HD grayscale
```

Example 2: benchmarking parallel execution of multiple transformations:

```bash
./benchmark.sh HD contrast blur
```

or

```bash
./benchmark.sh 4K resize blur sharpness
```
> [!TIP]
> **Reproducing parallel experiments:** You can easily reproduce the benchmarks reported in Table 5 of the paper using the script. Just pass the list of transformations corresponding to the entry you want to reproduce from Table 5. For instance, to run the experiment for the `Cn-Sh-Re` entry, you would run `./benchmark.sh HD contrast sharpness resize`.
> [!IMPORTANT]
> **Sample output:** When benchmarking a single transformation, the output is printed to `stdout`. When benchmarking parallel execution of multiple transformations, however, the script generates a file (or multiple files, one per given transformation) with a `.output` suffix in the same directory. These files contain the standard output of running the `vimz` command directly, as shown in the figure below. In either case, the output includes various performance metrics.
- The total proof generation time can be calculated as the sum of two numbers from the output: `RecursiveSNARK creation` and `CompressedSNARK::prove`.
- `CompressedSNARK::verify` represents the verification time.
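To collect these timings from parallel runs, a simple grep over the generated files can help; this one-liner is just an illustrative suggestion built on the metric names quoted above:

```bash
# Pull the proving and verification timings out of all benchmark outputs.
grep -E "RecursiveSNARK creation|CompressedSNARK::(prove|verify)" *.output
```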
Other than benchmarking, if you want to execute VIMz directly, use the following command (for more details on running VIMz, see `vimz --help`):

```bash
vimz --function <FUNCTION> \
     --resolution <RESOLUTION> --input <FILE> \
     --circuit <R1CS FILE> --output <FILE> \
     --witnessgenerator <BINARY/WASM FILE>
```
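For instance, a direct contrast proof on an HD image could look roughly like the following. All file paths here are illustrative assumptions; substitute the circuit artifacts produced by `build_circuits.sh` and an input JSON prepared from `samples/`:

```bash
vimz --function contrast --resolution HD \
     --input samples/contrast_HD.json \
     --circuit circuits/contrast_step_HD.r1cs \
     --output proof_contrast_HD.json \
     --witnessgenerator circuits/contrast_step_HD_js/contrast_step_HD.wasm
```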
We've provided a Python GUI to apply the supported effects to a given image. You can find it in the `py_modules` directory.
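To launch it, run the script from the repository root; we assume here that the entry point is the `pythn_formatter.py` file named in the figure caption further below:

```bash
python py_modules/pythn_formatter.py
```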
- When you run it, a `tkinter`-based file picker will open to select the input image, which must be exactly in HD or 4K resolution. You can also use the sample images provided in the `samples` directory.
- After this, the script will take a little time to digest the image and create proper inputs for the Nova prover.
- The script will then ask you to select the edit you wish to apply to the image: `Enter your command: 1) crop 2) resize ... 7) blur`. Enter the number of the transformation you want, e.g., `2` to apply `resize`.
- Based on the selected effect, the script might ask for other configs, such as the ratio for `contrast` or `sharpness`.
- The script then displays the image and the applied effect in a new window. See the figure below for an example.
- After you close the window, the script applies the transformation and creates the final JSON file for your image edit to be proven with VIMz. The JSON file follows the exact structure of the sample JSON files available in the `samples/` directory.
*GUI for `pythn_formatter.py`*
- We thank @iden3 for building the awesome Circom language and providing CircomLib.
- This work currently relies heavily on Nova-Scotia's compiler for transforming Circom circuits into ones compatible with Nova.
- A very early version of the project (based solely on Circom, without Nova) was inspired by the image transformation proofs in @TrishaDatta's Circom circuits repository, which relate to the Medium post by Trisha Datta and Dan Boneh.
This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license.