luzimu and nielsr (HF Staff) committed on
Commit cd520ee · verified · Parent: 9cf114d

Improve model card: Add code-generation tag and sample usage (#2)

- Improve model card: Add code-generation tag and sample usage (10518e39a8f4ce777b2638ffbf616c3f4dadac24)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +37 -1
README.md CHANGED
@@ -5,11 +5,13 @@ datasets:
 - luzimu/WebGen-Bench
 language:
 - en
+library_name: transformers
 license: mit
 metrics:
 - accuracy
 pipeline_tag: text-generation
-library_name: transformers
+tags:
+- code-generation
 ---
 
 # WebGen-LM
@@ -30,6 +32,40 @@ The WebGen-LM family of models are as follows:
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b0bfef2f2f9c345b87e673/ADt1JdvKw-IZ_xnS17adL.png)
 
+## Sample Usage
+
+You can use this model with the Hugging Face `transformers` library.
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+model_id = "luzimu/WebGen-LM-7B"  # This model card refers to WebGen-LM-7B
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+# Example for website generation
+user_prompt = "Generate a simple HTML page with a heading 'Hello, World!' and a paragraph of lorem ipsum text."
+messages = [
+    {"role": "user", "content": user_prompt}
+]
+
+# Apply chat template for instruction-following format
+text_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+# Generate output
+model_inputs = tokenizer(text_input, return_tensors="pt").to(model.device)
+generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=500, do_sample=True, temperature=0.01, top_k=50, top_p=0.95)
+
+# Decode and print the generated code
+generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
+print(generated_text)
+
+# Example using Hugging Face pipeline for simpler inference
+generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
+result = generator(user_prompt, max_new_tokens=500, do_sample=True, temperature=0.01, top_k=50, top_p=0.95)
+print(result[0]['generated_text'])
+```
 
 ## Citation
 
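For review convenience, here is a sketch of the README.md YAML front matter as it reads after both hunks apply, reconstructed solely from the context lines visible in the diff above. Lines 1–3 of the file fall outside the hunk context and are elided; any metadata on those lines is assumed unchanged by this commit.

```yaml
# Reconstructed front matter after this commit (lines 1-3 not shown in
# the diff are elided as "...").
# ...
datasets:
- luzimu/WebGen-Bench
language:
- en
library_name: transformers   # relocated; keys are now in alphabetical order
license: mit
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code-generation            # newly added tag
---
```

Net effect of the metadata hunk: `library_name: transformers` moves up next to the other l-keys, and a `tags` list with `code-generation` is added, which is what surfaces the model under the code-generation filter on the Hub.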