---
license: mit
datasets:
  - wikimedia/wikipedia
  - bookcorpus/bookcorpus
  - SetFit/mnli
  - sentence-transformers/all-nli
language:
  - en
new_version: v1.1
base_model:
  - google-bert/bert-base-uncased
pipeline_tag: text-classification
tags:
  - BERT
  - MNLI
  - NLI
  - transformer
  - pre-training
  - nlp
  - tiny-bert
  - edge-ai
  - transformers
  - low-resource
  - micro-nlp
  - quantized
  - iot
  - wearable-ai
  - offline-assistant
  - intent-detection
  - real-time
  - smart-home
  - embedded-systems
  - command-classification
  - toy-robotics
  - voice-ai
  - eco-ai
  - english
  - lightweight
  - mobile-nlp
metrics:
  - accuracy
  - f1
  - inference
  - recall
library_name: transformers
---


# 🌟 bert-lite: A Lightweight BERT for Efficient NLP 🌟

## 🚀 Overview

Meet bert-lite, a streamlined marvel of NLP! 🎉 Designed with efficiency in mind, this model pairs a compact architecture with strong performance on tasks like MNLI and NLI, and it holds up well in low-resource environments. Its lightweight footprint makes bert-lite a natural fit for edge devices, IoT applications, and real-time NLP. 🌍
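
Since the card targets MNLI/NLI, a natural next step is to attach a sequence-classification head and fine-tune. The sketch below is a hedged illustration, not the published training setup: the repo id `boltuix/bert-lite` is taken from this page, the three-way label layout is the standard MNLI convention, and the head is freshly initialized, so the probabilities are only meaningful after fine-tuning.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "boltuix/bert-lite"  # assumed: this repo's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A new classification head goes on top of the pretrained encoder;
# fine-tune on MNLI before trusting the outputs.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=3  # MNLI: entailment / neutral / contradiction
)

# NLI inputs are premise-hypothesis pairs encoded as one sequence pair
inputs = tokenizer(
    "A man is playing a guitar.",  # premise
    "Someone is making music.",    # hypothesis
    return_tensors="pt",
    truncation=True,
)

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # shape (1, 3); class order depends on your fine-tuning setup
```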


## 🌟 Why bert-lite? The Lightweight Edge

  • πŸ” Compact Power: Optimized for speed and size
  • ⚑ Fast Inference: Blazing quick on constrained hardware
  • πŸ’Ύ Small Footprint: Minimal storage demands
  • 🌱 Eco-Friendly: Low energy consumption
  • 🎯 Versatile: IoT, wearables, smart homes, and more!
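
These claims are easy to sanity-check on your own hardware. The sketch below counts parameters and times a CPU forward pass; the repo id is assumed from this page, and the numbers will vary by machine.

```python
import time
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "boltuix/bert-lite"  # assumed: this repo's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id).eval()

# Parameter count and rough fp32 memory footprint (4 bytes per weight)
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.1f}M (~{n_params * 4 / 1e6:.0f} MB fp32)")

# Average single-sentence CPU latency over a few runs
inputs = tokenizer("The robot can [MASK] the room in minutes.", return_tensors="pt")
with torch.no_grad():
    model(**inputs)  # warm-up
    start = time.perf_counter()
    for _ in range(20):
        model(**inputs)
elapsed = (time.perf_counter() - start) / 20
print(f"avg latency: {elapsed * 1000:.1f} ms")
```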

## 🧠 Model Details

| Property | Value |
| --- | --- |
| 🧱 Layers | Custom lightweight design |
| 🧠 Hidden Size | Optimized for efficiency |
| 👁️ Attention Heads | Minimal yet effective |
| ⚙️ Parameters | Ultra-low parameter count |
| 💽 Size | Quantized for minimal storage (see the quantization sketch below) |
| 🌐 Base Model | google-bert/bert-base-uncased |
| 🆙 Version | v1.1 (April 04, 2025) |
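
The size row mentions quantization. One common recipe for CPU targets is PyTorch dynamic quantization, which stores the `Linear` weights as int8; the sketch below is a generic illustration of that technique, not necessarily how the published weights were produced.

```python
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("boltuix/bert-lite")  # assumed repo id

# Replace Linear layers with int8 dynamically-quantized equivalents (CPU inference only)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

torch.save(quantized.state_dict(), "bert-lite-int8.pt")
print("saved int8 state dict")
```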

## 📜 License

MIT License: free to use, modify, and share.

## 🔀 Usage Example – Masked Language Modeling (MLM)

```python
from transformers import pipeline

# 📒 Start demo
print("\n🔀 Masked Language Model (MLM) Demo")

# 🧠 Load the bert-lite masked language model (repo id taken from this page)
mlm_pipeline = pipeline("fill-mask", model="boltuix/bert-lite")

# ✍️ Masked sentences
masked_sentences = [
    "The robot can [MASK] the room in minutes.",
    "He decided to [MASK] the project early.",
    "This device is [MASK] for small tasks.",
    "The weather will [MASK] by tomorrow.",
    "She loves to [MASK] in the garden.",
    "Please [MASK] the door before leaving.",
]

# 🤖 Predict the top three candidates for each mask
for sentence in masked_sentences:
    print(f"\nInput: {sentence}")
    predictions = mlm_pipeline(sentence)
    for pred in predictions[:3]:
        print(f"✨ → {pred['sequence']} (score: {pred['score']:.4f})")
```

Example output:

```text
🔀 Masked Language Model (MLM) Demo

Input: The robot can [MASK] the room in minutes.
✨ → The robot can clean the room in minutes. (score: 0.3124)
✨ → The robot can scan the room in minutes. (score: 0.1547)
✨ → The robot can paint the room in minutes. (score: 0.0983)

Input: He decided to [MASK] the project early.
✨ → He decided to finish the project early. (score: 0.3876)
✨ → He decided to start the project early. (score: 0.2109)
✨ → He decided to abandon the project early. (score: 0.0765)

Input: This device is [MASK] for small tasks.
✨ → This device is perfect for small tasks. (score: 0.2458)
✨ → This device is great for small tasks. (score: 0.1894)
✨ → This device is useful for small tasks. (score: 0.1321)

Input: The weather will [MASK] by tomorrow.
✨ → The weather will improve by tomorrow. (score: 0.2987)
✨ → The weather will change by tomorrow. (score: 0.1765)
✨ → The weather will clear by tomorrow. (score: 0.1034)

Input: She loves to [MASK] in the garden.
✨ → She loves to work in the garden. (score: 0.3542)
✨ → She loves to play in the garden. (score: 0.1986)
✨ → She loves to relax in the garden. (score: 0.0879)

Input: Please [MASK] the door before leaving.
✨ → Please close the door before leaving. (score: 0.4673)
✨ → Please lock the door before leaving. (score: 0.3215)
✨ → Please open the door before leaving. (score: 0.0652)
```
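
The same predictions can be reproduced without the `pipeline` helper, which makes the top-k step explicit. A minimal sketch, with the same assumed repo id:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "boltuix/bert-lite"  # assumed: this repo's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id).eval()

sentence = "Please [MASK] the door before leaving."
inputs = tokenizer(sentence, return_tensors="pt")

# Locate the [MASK] position in the encoded sequence
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits

# Top-3 candidate tokens for the masked slot
top = logits[0, mask_index].softmax(dim=-1).topk(3)
for score, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>10}  (score: {score:.4f})")
```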