---
title: AskVeracity
emoji: 📉
colorFrom: blue
colorTo: pink
sdk: streamlit
sdk_version: 1.44.1
app_file: app.py
pinned: false
license: mit
short_description: Fact-checking and misinformation detection tool.
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# AskVeracity: Fact-Checking System

A streamlined web application that analyzes claims to determine their truthfulness through evidence gathering and analysis.

## Overview

This application uses an agentic AI approach to verify factual claims through a combination of NLP techniques and large language models. The AI agent:

1. Uses a ReAct (Reasoning + Acting) methodology to analyze claims
2. Dynamically gathers evidence from multiple sources (Wikipedia, News APIs, RSS feeds, fact-checking sites)
3. Intelligently decides which tools to use, and in what order, based on the claim's category
4. Classifies the truthfulness of claims using the collected evidence
5. Provides transparency into its reasoning process
6. Generates clear explanations for its verdict with confidence scores

## Features

- **Claim Extraction**: Identifies and focuses on the primary factual claim
- **Category Detection**: Determines the claim's category to optimize evidence retrieval
- **Multi-source Evidence**: Gathers evidence from Wikipedia, news articles, academic sources, and fact-checking sites
- **Semantic Analysis**: Analyzes evidence relevance using advanced NLP techniques
- **Transparent Classification**: Provides clear verdicts with confidence scores
- **Detailed Explanations**: Generates human-readable explanations for verdicts
- **Interactive UI**: Easy-to-use Streamlit interface with evidence exploration

## Project Structure

```
askveracity/
│
├── app.py                        # Main Streamlit application
├── agent.py                      # LangGraph agent implementation
├── config.py                     # Configuration and API keys
├── requirements.txt              # Dependencies for the application
├── .streamlit/                   # Streamlit configuration
│   ├── config.toml               # UI theme configuration
│   └── secrets.toml.example      # Example secrets file (do not commit actual secrets)
├── utils/
│   ├── __init__.py
│   ├── api_utils.py              # API rate limiting and error handling
│   ├── performance.py            # Performance tracking utilities
│   └── models.py                 # Model initialization functions
├── modules/
│   ├── __init__.py
│   ├── claim_extraction.py       # Claim extraction functionality
│   ├── evidence_retrieval.py     # Evidence gathering from various sources
│   ├── classification.py         # Truth classification logic
│   ├── explanation.py            # Explanation generation
│   ├── rss_feed.py               # RSS feed evidence retrieval
│   ├── semantic_analysis.py      # Relevance analysis for evidence
│   └── category_detection.py     # Claim category detection
├── data/
│   └── source_credibility.json   # Source credibility data
└── tests/
    ├── __init__.py
    └── test_claim_extraction.py  # Unit tests for claim extraction
```

## Setup and Installation

### Local Development

1. Clone this repository:

   ```
   git clone https://github.com/yourusername/askveracity.git
   cd askveracity
   ```

2. Install the required dependencies:

   ```
   pip install -r requirements.txt
   ```

3. Set up your API keys. You have two options:

   **Option 1: Using Streamlit secrets (recommended for local development)**

   - Copy the example secrets file to create your own:

     ```
     cp .streamlit/secrets.toml.example .streamlit/secrets.toml
     ```

   - Edit `.streamlit/secrets.toml` and add your API keys:

     ```toml
     OPENAI_API_KEY = "your_openai_api_key"
     NEWS_API_KEY = "your_news_api_key"
     FACTCHECK_API_KEY = "your_factcheck_api_key"
     ```

   **Option 2: Using environment variables**

   Create a `.env` file in the root directory with the following content:

   ```
   OPENAI_API_KEY=your_openai_api_key
   NEWS_API_KEY=your_news_api_key
   FACTCHECK_API_KEY=your_factcheck_api_key
   ```

4. If you use environment variables, load them before starting the app. At the start of your Python script:

   ```python
   # In Python
   from dotenv import load_dotenv
   load_dotenv()
   ```

   Or in your terminal before running the app:

   ```bash
   # Unix/Linux/macOS: export the variables into the current shell
   set -a; source .env; set +a

   # Windows: install python-dotenv[cli], then run
   dotenv run streamlit run app.py
   ```

### Running the Application

Launch the Streamlit app by running:

```
streamlit run app.py
```

### Deploying to Hugging Face Spaces

1. Fork this repository to your GitHub account
2. Create a new Space on Hugging Face:
   - Go to https://huggingface.co/spaces
   - Click "Create new Space"
   - Select "Streamlit" as the SDK
   - Choose "From GitHub" as the source
   - Connect to your GitHub repository
3. Add the required API keys as secrets:
   - Go to the "Settings" tab of your Space
   - Navigate to the "Repository secrets" section
   - Add the following secrets:
     - `OPENAI_API_KEY`
     - `NEWS_API_KEY`
     - `FACTCHECK_API_KEY`
4. Your Space will automatically deploy with the changes

## Rate Limiting and API Considerations

The application implements intelligent rate limiting for API calls to:

- Wikipedia
- WikiData
- News API
- Google FactCheck Tools
- RSS feeds

The system includes exponential backoff for retries and optimized API usage to stay within free API tiers. Rate limits can be configured in the `config.py` file.
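For illustration, here is a minimal sketch of the exponential-backoff pattern described above. It is not the actual implementation in `utils/api_utils.py`; the decorator, constant names, and values are hypothetical stand-ins for the settings defined in `config.py`.

```python
import random
import time
from functools import wraps

# Hypothetical values; the real retry settings live in config.py.
MAX_RETRIES = 3
BASE_DELAY_SECONDS = 1.0


def with_exponential_backoff(func):
    """Retry a flaky API call, doubling the delay after each failure."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        for attempt in range(MAX_RETRIES + 1):
            try:
                return func(*args, **kwargs)
            except Exception:
                if attempt == MAX_RETRIES:
                    raise  # out of retries, surface the error
                # Exponential delay plus a little jitter to avoid bursts.
                delay = BASE_DELAY_SECONDS * (2 ** attempt) + random.uniform(0, 0.5)
                time.sleep(delay)
    return wrapper


@with_exponential_backoff
def fetch_wikipedia_extract(title: str) -> str:
    """Placeholder for a real Wikipedia API call."""
    ...
```

The small random jitter is a common refinement that keeps parallel retries from hitting an API at the same moment.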
## Best Practices for Claim Verification

For optimal results with AskVeracity:

- Keep claims short and precise
- Include key details in your claim
- Phrase claims as direct statements rather than questions
- Be specific about who said what, when relevant

## Development Notes

### UI Differences Between Environments

When developing locally versus deploying to Hugging Face Spaces, you may notice visual differences in certain UI elements:

- **Button styling**: Buttons may appear in different colors (blue/purple locally vs. coral/orange on HF Spaces). This is due to Hugging Face Spaces applying its own theme based on the `colorFrom` and `colorTo` values in the configuration.

These differences are cosmetic only and don't affect functionality. We've chosen to maintain the default Hugging Face styling for the deployed version.

## License

This project is licensed under the [MIT License](./LICENSE), allowing free use, modification, and distribution with proper attribution.