FastAPI is the engine under the hood of VeriFact’s backend – so understanding FastAPI gives you a clear picture of how your fact-checking service works, and how to run it yourself.
What is FastAPI?
FastAPI is a modern, high-performance Python web framework for building APIs.
Key characteristics:
- ASGI-based: Built on top of Starlette (which handles the web/ASGI layer) and Pydantic (which handles data validation).
- Type-hint driven: You declare request and response models with Python type hints; FastAPI uses those to:
  - Validate data automatically
  - Generate OpenAPI (Swagger) documentation
  - Provide better editor/IDE support
- High performance: Comparable to Node.js and Go for many workloads, especially IO-bound services.
- Automatic docs: You get `/docs` (Swagger UI) and `/redoc` (ReDoc) “for free” from your route definitions.
For VeriFact, FastAPI is the natural fit: it exposes a /check endpoint that accepts claims, runs retrieval + verification, and returns structured JSON that WordPress and other clients can call.
How FastAPI Works (Under the Hood)
At a high level, FastAPI:
- Defines an ASGI app

```python
from fastapi import FastAPI

app = FastAPI()
```

This `app` object is what Uvicorn (or another ASGI server) runs.

- Uses decorators to define routes (“path operations”)

```python
@app.get("/health")
def health():
    return {"status": "ok"}
```

- Uses Pydantic models for data validation & serialization

```python
from pydantic import BaseModel

class CheckRequest(BaseModel):
    claim: str
    depth: int | None = 3

class CheckResponse(BaseModel):
    stance: str
    confidence: float
    evidence: list[str]
```

- Ties it together for automatic docs & OpenAPI

```python
@app.post("/check", response_model=CheckResponse)
async def check(req: CheckRequest):
    # Run your logic here; in VeriFact's case:
    # 1) retrieve evidence, 2) run NLI, 3) build response
    return CheckResponse(
        stance="SUPPORTED",
        confidence=0.98,
        evidence=["Example evidence snippet"],
    )
```
From these pieces, FastAPI automatically:
- Validates incoming JSON against `CheckRequest`
- Serializes responses as `CheckResponse`
- Documents your API at `/docs` and `/redoc`
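To make the “type-hint driven” idea concrete, here is a minimal stdlib-only sketch of the mechanism FastAPI delegates to Pydantic: read a model's type hints and check an incoming payload against them. `validate_payload` and this stripped-down `CheckRequest` are illustrative stand-ins, not FastAPI or Pydantic internals.

```python
import typing

def validate_payload(model: type, payload: dict) -> dict:
    """Check a dict against a class's type hints; a tiny stand-in for Pydantic."""
    hints = typing.get_type_hints(model)
    errors = []
    for field, expected in hints.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    if errors:
        # FastAPI would turn this into an HTTP 422 response
        raise ValueError("; ".join(errors))
    return payload

class CheckRequest:
    claim: str
    depth: int

# Valid payload passes through unchanged; a wrong type raises ValueError.
validate_payload(CheckRequest, {"claim": "The Eiffel Tower is in Paris.", "depth": 3})
```

The real framework does far more (coercion, nested models, defaults, OpenAPI schema generation), but the core loop is the same: annotations in, validated data or a 422 out.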
Installing FastAPI
1. Prerequisites
- Python 3.10+ (the `int | None` syntax used in the examples requires 3.10; VeriFact targets 3.11+, which is a good baseline)
- A virtual environment is recommended so dependencies don’t conflict with other projects.
2. Create a project and virtual environment
```bash
mkdir verifact-backend
cd verifact-backend
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
```
3. Install FastAPI and Uvicorn
```bash
pip install fastapi "uvicorn[standard]"
```
Optional (but common) dependencies for a VeriFact-like stack:
```bash
pip install pydantic-settings sentence-transformers faiss-cpu wikipedia-api
```

(Note: `pydantic-settings` provides `.env`-based configuration for Pydantic v2; the old `pydantic[dotenv]` extra only applies to Pydantic v1.)
Minimal FastAPI App (Step-by-Step)
Create main.py:
```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(
    title="VeriFact Fact-Checking API",
    description="FastAPI backend for fact-checking claims using external evidence.",
    version="2.0.7",
)

class CheckRequest(BaseModel):
    claim: str
    depth: int | None = 3

class CheckResponse(BaseModel):
    stance: str
    confidence: float
    evidence: list[str]

@app.get("/health")
async def health():
    return {"status": "ok"}

@app.post("/check", response_model=CheckResponse)
async def check(req: CheckRequest):
    # TODO: plug in VeriFact logic (retrieval + NLI)
    return CheckResponse(
        stance="NEEDS_EVIDENCE",
        confidence=0.0,
        evidence=["Backend is wired; verification logic not yet implemented."],
    )
```
Run it in development mode:
```bash
uvicorn main:app --reload --host 127.0.0.1 --port 8081
```
Now you can:
- Visit http://127.0.0.1:8081/docs → interactive Swagger UI
- Visit http://127.0.0.1:8081/redoc → ReDoc
Configuring FastAPI for Real Use
For a production-ish setup (like VeriFact), you’ll want some structure.
1. Structured settings with Pydantic
Create settings.py. With Pydantic v2, `BaseSettings` lives in the separate `pydantic-settings` package (`pip install pydantic-settings`), not in `pydantic` itself:

```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    app_name: str = "VeriFact Fact-Checking API"
    debug: bool = False

    # example connection settings
    wikipedia_enabled: bool = True
    archive_enabled: bool = False

    # optional external APIs
    openai_api_key: str | None = None

settings = Settings()
```
Use it in main.py:
```python
from fastapi import FastAPI

from settings import settings  # plain import: uvicorn main:app runs from the project root

app = FastAPI(
    title=settings.app_name,
    debug=settings.debug,
)
```
And define .env:
```
APP_NAME="MyBrand Fact-Checking API"
DEBUG=False
WIKIPEDIA_ENABLED=True
ARCHIVE_ENABLED=True
OPENAI_API_KEY=sk-...
```
2. Running in production
For production, you typically:
- Run `uvicorn` without `--reload`
- Bind to localhost
- Put a reverse proxy (Nginx or Apache) in front
Example:
```bash
uvicorn main:app --host 127.0.0.1 --port 8081
```
Then configure Nginx (example):
```nginx
server {
    listen 80;
    server_name api.yourbrand.com;

    location /factcheck/ {
        proxy_pass http://127.0.0.1:8081/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    # add TLS / Let's Encrypt for HTTPS in a real deployment
}
```
That gives you a public endpoint like:
https://api.yourbrand.com/factcheck/check
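As a client-side sketch of that contract, the snippet below builds (but does not send) a POST request matching the `/check` endpoint, using only the standard library. The base URL mirrors the hypothetical Nginx example above, and the payload fields come from the `CheckRequest` model.

```python
import json
import urllib.request

# Hypothetical public base URL from the Nginx example above
BASE_URL = "https://api.yourbrand.com/factcheck"

def build_check_request(claim: str, depth: int = 3) -> urllib.request.Request:
    """Build a POST request matching the /check contract (claim + depth)."""
    payload = json.dumps({"claim": claim, "depth": depth}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/check",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_check_request("The Eiffel Tower is located in Paris.")
# urllib.request.urlopen(req) would return the CheckResponse JSON
# (stance, confidence, evidence) once the server is live.
```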
Self-Hosting a VeriFact-Style FastAPI Server
The VeriFact backend is essentially a more advanced version of the example above:
- It exposes `/health` and `/check`
- It loads models (embeddings + NLI) at startup
- It retrieves evidence from sources like Wikipedia / Archive.org
- It returns structured JSON (stance, confidence, evidence)
To self-host something equivalent:
1. Server requirements
Roughly:
- Linux VM or server
- Python 3.11+
- 2+ GB RAM (for models)
- 5+ GB disk (for code, models, logs)
- Outbound access to:
- Wikipedia
- Hugging Face model hosting
- Any optional providers (OpenAI, Serper, etc.)
2. Install the backend
On your server:
```bash
git clone <your-verifact-backend-fork> verifact-backend
cd verifact-backend
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
Run an initial test:

```bash
uvicorn server:app --host 127.0.0.1 --port 8081
```

Then, from a second terminal:

```bash
curl http://127.0.0.1:8081/health
```
You should get {"status": "ok"} or similar.
3. Run as a service
Create a verifact.service for systemd (example):
```ini
[Unit]
Description=VeriFact FastAPI Service
After=network.target

[Service]
User=www-data
WorkingDirectory=/opt/verifact-backend
Environment="PYTHONUNBUFFERED=1"
ExecStart=/opt/verifact-backend/.venv/bin/uvicorn server:app --host 127.0.0.1 --port 8081
Restart=always

[Install]
WantedBy=multi-user.target
```
Enable it:
```bash
sudo cp verifact.service /etc/systemd/system/
sudo systemctl enable verifact
sudo systemctl start verifact
sudo systemctl status verifact
```
4. Add a reverse proxy
Use Nginx or Apache to expose it as:
https://your-domain.com/verifact
Once that’s done, your public API base URL is something like:
- https://your-domain.com/verifact/ (root)
- https://your-domain.com/verifact/check (fact-checking endpoint)
- https://your-domain.com/verifact/health (health check)
Connecting VeriFact (WordPress Plugin) to Your Self-Hosted FastAPI
Now the fun part: making your WordPress VeriFact plugin talk to your own server instead of a shared endpoint.
1. Configure the API base URL in WordPress
Inside your WordPress admin (with the VeriFact plugin installed and activated):
- Go to the VeriFact settings area (e.g., API Management or equivalent settings page).
- Set the API Base URL to your self-hosted endpoint: https://your-domain.com/verifact/
- Save settings.
2. Test connectivity
Most recent versions of the plugin include a way to test the connection:
- Click Test Connection / Health Check in the plugin’s API settings.
- The plugin should call `GET https://your-domain.com/verifact/health` and display the result.
- If it fails:
- Check your reverse proxy config
- Check TLS/HTTPS
- Check that the service is listening on localhost and reachable
3. Run a test fact-check from WordPress
- Add a page with the shortcode: `[verifact]`
- View the page on the front-end.
- Enter a simple claim like: “The Eiffel Tower is located in Paris.”
- Submit and wait for the result.
If everything is configured correctly:
- The WordPress plugin will call its internal REST route (e.g. `verifact/v1/check`).
- That route will proxy to https://your-domain.com/verifact/check.
- The FastAPI backend will run retrieval + NLI.
- The response is logged in WordPress (`wp_verifact_logs`) and displayed to the user.
Best Uses for FastAPI in This Context
FastAPI shines in several scenarios that match VeriFact’s needs:
- High-throughput API for fact-checking: handling many concurrent `/check` calls from WordPress and other clients.
- ML / NLP inference services: loading NLI models and embedding models once at startup, then reusing them across requests.
- Microservices and multi-source retrieval: combining Wikipedia, Archive.org (eventually Grokopedia), OpenAI, and other APIs into a single, clean JSON interface.
- Internal tools: custom dashboards, internal QA/testing tools, or additional admin UIs can directly call your FastAPI endpoints.
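The “load once at startup, reuse across requests” pattern behind the inference use case can be sketched without any ML library. Here `load_nli_model` is a hypothetical placeholder for an expensive model load; the point is the lazy, thread-safe singleton that guarantees the load happens exactly once even under concurrent requests.

```python
import threading
import time

_lock = threading.Lock()
_model = None

def load_nli_model() -> dict:
    """Placeholder for an expensive model load (hypothetical)."""
    time.sleep(0.01)  # simulate load time
    return {"name": "nli-model", "loaded": True}

def get_model() -> dict:
    """Load the model at most once, then reuse it for every request."""
    global _model
    if _model is None:
        with _lock:  # double-checked locking: concurrent callers load only once
            if _model is None:
                _model = load_nli_model()
    return _model
```

In a real FastAPI app you would typically trigger this from a startup/lifespan hook so the first user request never pays the load cost.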
Best Practices for Depth and Speed
VeriFact’s mission is both deep (thorough verification) and fast (low response times). FastAPI gives you tools for both, but you need to use them well.
For Depth (Quality of Verification)
- Retrieve multiple evidence snippets: Don't rely on a single article or snippet. Pull the top k results (e.g. 3-10) and let your NLI model weigh them.
- Use configurable “depth” per request: Let clients (like your WordPress plugin) request different depths:
  - Quick checks (low depth) for editors on a deadline
  - Deep dives (higher depth) for investigations or research
- Multi-source approaches: Combine:
  - Wikipedia (clean, structured general knowledge)
  - Archive.org (historical and original pages)
  - Future sources (Grokopedia, specialized databases)
- Log everything important: Keep a history of:
  - Claims checked
  - Evidence URLs
  - Stances and confidence
  - Runtimes and errors

This helps you debug and improve the system over time.
For Speed (Performance & Latency)
- Load models once at startup: Make sure your embedding and NLI models are loaded when the app starts, not per request:

  ```python
  from fastapi import FastAPI

  app = FastAPI()

  # Global model instances, created once at startup
  embedding_model = load_embedding_model()
  nli_model = load_nli_model()
  ```

- Use async & parallel I/O: When calling Wikipedia, Archive.org, or search APIs, use `async` and make concurrent requests where safe. Network I/O is often the bottleneck.
- Cache frequent results: Cache:
  - Embeddings for popular claims
  - Responses for the exact same claims

  Use in-memory caching or a dedicated cache (Redis) if you expect heavy repeat traffic.
- Right-size your models: Larger models give deeper reasoning but run slower. For many use cases, a small or medium embedding model (e.g., the MiniLM family) plus a medium NLI model gives a good balance for real-time use.
- Control request size and complexity: Limit:
  - Max text length for claims
  - Max number of evidence items per request

  This prevents “pathological” requests from hurting everyone else's performance.
- Monitor and profile: Track:
  - Average and p95 latency
  - Time spent in retrieval vs. NLI vs. external APIs

  Optimize where it matters most (often I/O and too-deep evidence retrieval).
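The concurrency and caching advice above can be combined in one small sketch. `fetch_source` is a stand-in for a real async HTTP call (no network involved here), and the dict-based cache is the simplest possible in-memory version of what you would move to Redis under heavy traffic:

```python
import asyncio

_cache: dict[str, list[str]] = {}

async def fetch_source(source: str, claim: str) -> str:
    """Stand-in for an async HTTP call to Wikipedia, Archive.org, etc."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"{source}: evidence for {claim!r}"

async def retrieve_evidence(claim: str) -> list[str]:
    """Query all sources concurrently; cache results for exact-match claims."""
    key = claim.strip().lower()
    if key in _cache:
        return _cache[key]
    sources = ["wikipedia", "archive.org"]
    # gather() runs the per-source coroutines concurrently, so total latency
    # is roughly the slowest source rather than the sum of all of them
    results = await asyncio.gather(*(fetch_source(s, claim) for s in sources))
    _cache[key] = list(results)
    return _cache[key]

evidence = asyncio.run(retrieve_evidence("The Eiffel Tower is located in Paris."))
```

Normalizing the cache key (strip + lowercase) is a cheap way to make trivially restated claims hit the cache; anything smarter (embedding similarity) belongs in the retrieval layer itself.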
Wrapping Up
FastAPI is the backbone of VeriFact’s backend: a fast, typed, and well-documented API layer that makes it easy to plug your fact-checking engine into WordPress or any other client.
By:
- Installing FastAPI correctly,
- Configuring it with proper settings,
- Deploying it behind a reverse proxy, and
- Pointing your VeriFact WordPress plugin at your own endpoint,
you get a fully self-hosted, self-branded fact-checking service with fine-grained control over depth, speed, and sources.