
FraudX – Receipt Forgery Detection

KFUPM – ICS 619 | Rayan Alsubhi

Demo: Multi-region digit edit (CPI) · expected High Risk · ~99%
Verdict — High risk

Likely forged

Multiple regions show strong evidence of editing.

Patch CNN: High risk, 99% forgery confidence
VAT QR: no ZATCA QR detected
Reviewer decision: none recorded
⚠ Duplicate: exact match of a file already analysed in this session.

Saudi VAT QR check — no QR

No ZATCA VAT QR detected.
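For context on what this check looks for: ZATCA e-invoice QR payloads are Base64-encoded TLV records (one byte tag, one byte length, then the value), with tags 1–5 carrying the seller name, VAT registration number, timestamp, invoice total, and VAT amount. A minimal decoder sketch — the helper names and the sample payload are illustrative, not part of FraudX:

```python
import base64

def decode_zatca_tlv(payload_b64: str) -> dict:
    """Decode a Base64 ZATCA QR payload into {tag: value} pairs."""
    raw = base64.b64decode(payload_b64)
    fields, i = {}, 0
    while i + 2 <= len(raw):
        tag, length = raw[i], raw[i + 1]
        fields[tag] = raw[i + 2:i + 2 + length].decode("utf-8")
        i += 2 + length
    return fields

def tlv(tag: int, value: str) -> bytes:
    """Encode one TLV field (used here only to build a sample payload)."""
    v = value.encode("utf-8")
    return bytes([tag, len(v)]) + v

# Illustrative values, not a real invoice.
sample = base64.b64encode(
    tlv(1, "Demo Store") + tlv(2, "310000000000003")
    + tlv(3, "2026-05-11T23:28:40Z") + tlv(4, "115.00") + tlv(5, "15.00")
).decode()

print(decode_zatca_tlv(sample)[4])  # → 115.00 (invoice total, tag 4)
```

Locating the QR in the receipt image would additionally need a detector such as pyzbar or OpenCV's QRCodeDetector; the sketch above covers only the payload format.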

Key findings

Document — original & heatmap

[Figure: original document alongside the CNN suspicion heatmap]

Suspicious regions

The top 5 highest-confidence regions of the 40 that exceeded the model's threshold; each corresponds to a 128 × 128 patch of the document. Per-region scores, predicted edit types, and affected fields are listed in the breakdown table below.
Region breakdown table (5 rows)
# | Coords       | Patch score | Edit type (conf)    | Field affected (conf)
1 | (704, 1792)  | 100%        | CPI (51%)           | Total/payment (79%)
2 | (512, 1984)  | 100%        | PIX (61%)           | Product (45%, low conf)
3 | (768, 1792)  | 100%        | CUT (46%, low conf) | Total/payment (91%)
4 | (448, 1984)  | 100%        | CPI (36%, low conf) | Product (59%)
5 | (0, 1984)    | 97%         | CUT (63%)           | Total/payment (50%)
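The coordinates above all fall on a 64-pixel grid because patches are sampled at 128 × 128 with stride 64. A sketch of the patch grid for this image (936 × 2753 px), assuming the last row and column of patches are padded past the image edge — an assumption, but one that reproduces the 602-patch count reported below:

```python
import math

def patch_grid(width: int, height: int, patch: int = 128, stride: int = 64):
    """Top-left (x, y) coordinates of a sliding-window grid that fully
    covers the image; the final row/column may extend past the edge."""
    nx = math.ceil((width - patch) / stride) + 1
    ny = math.ceil((height - patch) / stride) + 1
    return [(x * stride, y * stride) for y in range(ny) for x in range(nx)]

coords = patch_grid(936, 2753)
print(len(coords))            # → 602, matching the patch count reported below
print((704, 1792) in coords)  # → True: the top flagged region lies on the grid
```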
Methodology, audit metadata & technical details
File: X51005568866.png · SHA-256: af9046c2d807… · Size: 936 × 2753 px
Analysed: 2026-05-11 23:28:40 · Model: FraudX v2-multi (ResNet-18, epoch 13, threshold 0.08)
Patch precision: 92.25% (vs paper OH-JPEG 79.41%)
Patch F1 / AUC: 91.79 / 0.97
Image-level F1: 29.66 (vs paper ChatGPT-relaxed 28.39)
Patches scored (this image): 602 (426 text-bearing)
Regions ≥ threshold: 40 (thr 0.08)
Top-region score: 0.993

Approach. A ResNet-18 patch classifier (128 × 128 patches, stride 64) trained on FINDIT2 (Tornes et al., ICDAR 2023). Two auxiliary heads classify the modification technique and the affected document field; the binary backbone is frozen, so the headline patch precision (92.25%) is preserved exactly while explainability is added. The image-level fraud score is the top-k mean of patch probabilities over text-bearing regions only (edge density ≥ 0.02), which keeps blank regions from diluting the score.
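A minimal sketch of this image-level scoring rule, assuming a crude gradient-threshold edge map stands in for whatever edge detector FraudX actually uses (the function names, the gradient threshold of 30, and k = 10 are illustrative):

```python
import numpy as np

def edge_density(patch: np.ndarray) -> float:
    """Fraction of pixels with a strong local intensity change (crude edge map)."""
    gx = np.abs(np.diff(patch.astype(float), axis=1))
    gy = np.abs(np.diff(patch.astype(float), axis=0))
    edges = (gx[:-1, :] > 30) | (gy[:, :-1] > 30)
    return float(edges.mean())

def image_score(patch_probs, patches, k: int = 10, min_density: float = 0.02) -> float:
    """Top-k mean of patch forgery probabilities over text-bearing patches only."""
    text_probs = [p for p, patch in zip(patch_probs, patches)
                  if edge_density(patch) >= min_density]
    if not text_probs:
        return 0.0
    top = sorted(text_probs, reverse=True)[:k]
    return sum(top) / len(top)

blank = np.zeros((128, 128))                                # no text content
texture = np.tile([[0, 255], [255, 0]], (64, 64)).astype(float)  # high-contrast "text"
print(round(image_score([0.99, 0.9, 0.8], [blank, texture, texture], k=2), 2))
# → 0.85: the blank patch's 0.99 is excluded, so it cannot inflate the score
```

The demo shows the point of the edge-density gate: a spuriously high probability on a blank patch is dropped before the top-k mean is taken.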

Class accuracies on the FINDIT2 test set.

  • Edit type: CPI 73% · CUT 100% · IMI 38% · PIX 55% · Other 14%
  • Field: Total/payment 67% · Metadata 47% · Product 27% · Company 24%

Original technical findings (raw).

  • Patch CNN flagged 40 of 426 text-rich patches (top score 1.00).
  • Top 5 suspicious regions clustered around image coordinates (704,1792), (512,1984), (768,1792), (448,1984), (0,1984).
  • Predicted modification mix across top regions: 2× CPI, 2× CUT, 1× PIX.
  • Predicted entity types: 3× Total/payment, 2× Product.
  • Document is an EXACT duplicate of a file already analysed in this session (resubmission).
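The exact-duplicate flag above follows directly from the SHA-256 in the audit metadata: hashing the raw file bytes and comparing against digests already seen this session detects byte-identical resubmissions. A sketch with hypothetical names (FraudX's actual session store is not shown in this report):

```python
import hashlib

class SessionDeduper:
    """Flags exact resubmissions by SHA-256 of the raw file bytes."""

    def __init__(self):
        self._seen: set[str] = set()

    def check(self, data: bytes) -> tuple[str, bool]:
        """Return (digest, is_duplicate) and record the digest."""
        digest = hashlib.sha256(data).hexdigest()
        duplicate = digest in self._seen
        self._seen.add(digest)
        return digest, duplicate

dedupe = SessionDeduper()
_, first = dedupe.check(b"receipt bytes")
_, again = dedupe.check(b"receipt bytes")
print(first, again)  # → False True: the second upload is flagged as a duplicate
```

Note this catches only byte-identical files; a re-saved or recompressed copy would hash differently and would need perceptual hashing instead.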