qaihm-bot committed (verified)
Commit: 1c6ccd0
1 Parent(s): 805adad

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -35,8 +35,8 @@ More details on model performance across various devices, can be found

  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.921 ms | 0 - 2 MB | FP16 | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.321 ms | 1 - 4 MB | FP16 | NPU | [Shufflenet-v2.so](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.919 ms | 0 - 2 MB | FP16 | NPU | [Shufflenet-v2.tflite](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.322 ms | 1 - 4 MB | FP16 | NPU | [Shufflenet-v2.so](https://huggingface.co/qualcomm/Shufflenet-v2/blob/main/Shufflenet-v2.so)


  ## Installation
@@ -96,16 +96,16 @@ python -m qai_hub_models.models.shufflenet_v2.export
  ```
  Profile Job summary of Shufflenet-v2
  --------------------------------------------------
- Device: Samsung Galaxy S23 Ultra (13)
- Estimated Inference Time: 0.92 ms
- Estimated Peak Memory Range: 0.02-2.22 MB
+ Device: Samsung Galaxy S24 (14)
+ Estimated Inference Time: 0.59 ms
+ Estimated Peak Memory Range: 0.02-31.31 MB
  Compute Units: NPU (202) | Total (202)

  Profile Job summary of Shufflenet-v2
  --------------------------------------------------
- Device: Samsung Galaxy S23 Ultra (13)
- Estimated Inference Time: 0.32 ms
- Estimated Peak Memory Range: 0.59-3.99 MB
+ Device: Samsung Galaxy S24 (14)
+ Estimated Inference Time: 0.23 ms
+ Estimated Peak Memory Range: 0.01-46.20 MB
  Compute Units: NPU (157) | Total (157)


@@ -225,7 +225,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
  ## License
  - The license for the original implementation of Shufflenet-v2 can be found
  [here](https://github.com/pytorch/vision/blob/main/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
+ - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})

  ## References
  * [ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design](https://arxiv.org/abs/1807.11164)
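For context, the updated "Profile Job summary" blocks in the second hunk are produced by the export entry point named in that hunk's header (`python -m qai_hub_models.models.shufflenet_v2.export`). Below is a minimal sketch of re-running it for the newly reported device; the `qai-hub-models` package name and the `--device` flag are assumptions not confirmed by this commit, and a configured Qualcomm AI Hub API token is presumed.

```
# Sketch only: module path taken from the hunk header above;
# the package name and --device flag are assumed, not confirmed by this commit.
pip install qai-hub-models
python -m qai_hub_models.models.shufflenet_v2.export --device "Samsung Galaxy S24"
```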