# FlashCheck-270M: The Edge Logic Engine

## Model Description
FlashCheck-270M is a specialized Small Language Model (SLM) fine-tuned for Contextual Policy Adherence and Hallucination Detection.
Built on the instruction-tuned Gemma 3 270M model, FlashCheck is designed to act as a lightweight, privacy-preserving guardrail in Retrieval-Augmented Generation (RAG) pipelines.
- Developer: Nehme AI Labs
- Base Model: `google/gemma-3-270m-it`
- License/Terms: Gemma (see the Gemma terms associated with the base model)
## What’s in this repo
- Transformers (standalone): `config.json` + `model.safetensors` + tokenizer files
- GGUF (local inference): `nehme-flashcheck-270m.Q8_0.gguf`
## Intended behavior
Given a Document (premise) and a Claim (hypothesis), the model answers:

- "Yes" if the claim is fully supported by the document
- "No" otherwise

For example, given the document "Refunds are available within 30 days of purchase." and the claim "Customers can request a refund 90 days after purchase.", the claim is not supported by the document, so the expected answer is "No".
## Usage
### 1) Python (Transformers)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "nehmeailabs-org/nehme-flashcheck-270m"

SYSTEM_MESSAGE = (
    "You are a fact checking model developed by NehmeAILabs. Determine whether the provided claim is consistent with "
    "the corresponding document. Consistency in this context implies that all information presented in the claim is "
    "substantiated by the document. If not, it should be considered inconsistent. Please assess the claim's consistency "
    "with the document by responding with either \"Yes\" or \"No\"."
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",
    torch_dtype="auto",
)
model.eval()

document = "The user must not share API keys."
claim = "The user message 'Here is the staging key sk-123' violates the policy."
user_prompt = f"Document: {document}\n\nClaim: {claim}"

messages = [
    {"role": "system", "content": SYSTEM_MESSAGE},
    {"role": "user", "content": user_prompt},
]

# Prefer the model's chat template if available
try:
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_tensors="pt",
    )
except Exception:
    plain = f"{SYSTEM_MESSAGE}\n\n{user_prompt}"
    input_ids = tokenizer(plain, return_tensors="pt").input_ids

input_ids = input_ids.to(model.device)

# Greedy decoding; sampling parameters are unnecessary when do_sample=False
with torch.no_grad():
    out = model.generate(
        input_ids=input_ids,
        max_new_tokens=8,
        do_sample=False,
    )

# Decode only the newly generated tokens
gen_ids = out[0, input_ids.shape[-1]:]
verdict = tokenizer.decode(gen_ids, skip_special_tokens=True).strip()
print(verdict)  # Expected: "Yes" or "No"
```
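For use as a programmatic guardrail in a RAG pipeline, the verdict can be mapped to a boolean. A minimal sketch reusing the `model`, `tokenizer`, and `SYSTEM_MESSAGE` defined above; the helper name `is_supported` and the fail-closed default are illustrative, not part of this repo:

```python
def is_supported(document: str, claim: str) -> bool:
    """Return True only if the model judges the claim to be supported by the document."""
    messages = [
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": f"Document: {document}\n\nClaim: {claim}"},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        out = model.generate(input_ids=input_ids, max_new_tokens=8, do_sample=False)
    verdict = tokenizer.decode(out[0, input_ids.shape[-1]:], skip_special_tokens=True)
    # Treat anything other than an explicit "Yes" as unsupported (fail closed).
    return verdict.strip().lower().startswith("yes")
```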
### 2) Local (GGUF / llama.cpp)
This repo includes `nehme-flashcheck-270m.Q8_0.gguf` for use with llama.cpp and compatible runtimes.

```bash
./main -m nehme-flashcheck-270m.Q8_0.gguf -p "Document: ...\n\nClaim: ..."
```
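The GGUF can also be driven programmatically, for example via llama-cpp-python. A minimal sketch, assuming the package is installed (`pip install llama-cpp-python`) and that the GGUF carries the base model's chat template; paths and parameters are illustrative:

```python
from llama_cpp import Llama

# Same system message as in the Transformers example above.
SYSTEM_MESSAGE = (
    "You are a fact checking model developed by NehmeAILabs. Determine whether the provided claim is consistent with "
    "the corresponding document. Consistency in this context implies that all information presented in the claim is "
    "substantiated by the document. If not, it should be considered inconsistent. Please assess the claim's consistency "
    "with the document by responding with either \"Yes\" or \"No\"."
)

llm = Llama(model_path="nehme-flashcheck-270m.Q8_0.gguf", n_ctx=2048, verbose=False)

resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM_MESSAGE},
        {
            "role": "user",
            "content": "Document: The user must not share API keys.\n\n"
                       "Claim: The user message 'Here is the staging key sk-123' violates the policy.",
        },
    ],
    max_tokens=8,
    temperature=0.0,
)
print(resp["choices"][0]["message"]["content"].strip())  # Expected: "Yes" or "No"
```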
## Benchmarks & Methodology (high level)
FlashCheck was trained on a curated mix of:
- AggreFact-style hallucination detection data
- Synthetic contrastive policy pairs to reduce keyword-matching failure modes
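A hypothetical sketch of what such a contrastive pair looks like (illustrative only, not drawn from the released training data): the same document is paired with two claims that share keywords but carry opposite verdicts, so the model cannot rely on surface keyword overlap alone.

```python
contrastive_pair = [
    {
        "document": "The user must not share API keys.",
        "claim": "Sharing an API key violates the policy.",
        "label": "Yes",  # fully supported by the document
    },
    {
        "document": "The user must not share API keys.",
        "claim": "The policy permits sharing API keys during staging.",
        "label": "No",  # contradicts the document despite heavy keyword overlap
    },
]
```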
## Limitations
- Optimized for English logic/policy checking.
- The model is best used as a guardrail / verifier, not a general chat assistant.
- Outputs may be sensitive to prompt format; keep prompts consistent.
## Citation

```bibtex
@misc{nehme2025flashcheck,
  title={FlashCheck: Efficient Logic Distillation for RAG Compliance},
  author={NehmeAILabs},
  year={2025},
  publisher={Nehme AI Labs},
  howpublished={\url{https://nehmeailabs.com}}
}
```