---
library_name: pytorch
license: other
pipeline_tag: image-to-image
tags:
- quantized
- android

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/sesr_m5_quantized/web-assets/model_demo.png)

# SESR-M5-Quantized: Optimized for Mobile Deployment
## Upscale images in real time

SESR M5 performs efficient on-device upscaling of images.

This model is an implementation of SESR-M5-Quantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/sesr).
This repository provides scripts to run SESR-M5-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/sesr_m5_quantized).


### Model Details

- **Model Type:** Super resolution
- **Model Stats:**
  - Model checkpoint: sesr_m5_4x_checkpoint
  - Input resolution: 128x128
  - Number of parameters: 338K
  - Model size: 389 KB




| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.383 ms | 3 - 5 MB | INT8 | NPU | [SESR-M5-Quantized.tflite](https://huggingface.co/qualcomm/SESR-M5-Quantized/blob/main/SESR-M5-Quantized.tflite) |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.048 ms | 0 - 10 MB | INT8 | NPU | [SESR-M5-Quantized.so](https://huggingface.co/qualcomm/SESR-M5-Quantized/blob/main/SESR-M5-Quantized.so) |



## Installation

This model can be installed as a Python package via pip.

```bash
pip install "qai-hub-models[sesr_m5_quantized]"
```



## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
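
Once configured, you can verify the setup from Python. The sketch below
(assuming the `qai_hub` client that `qai-hub-models` installs) simply lists the
cloud-hosted devices your token gives you access to:

```python
import qai_hub as hub

# If the API token is not configured correctly, this call fails with an
# authentication error; otherwise it returns the available cloud-hosted devices.
for device in hub.get_devices():
    print(device.name)
```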



## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.sesr_m5_quantized.demo
```

The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.sesr_m5_quantized.demo
```
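
For orientation, the demo's pre- and post-processing steps amount to the
pipeline sketched below. This is purely illustrative: `nn.Upsample` stands in
for the actual SESR-M5 network, and the file names are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from PIL import Image

# Placeholder for the SESR-M5 network (the demo loads the real pre-trained model).
model = nn.Upsample(scale_factor=4, mode="bilinear")

# Pre-processing: load an image and convert it to a 1x3xHxW float tensor in [0, 1].
img = Image.open("input.png").convert("RGB").resize((128, 128))
x = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().unsqueeze(0) / 255.0

# Inference: 128x128 input -> 512x512 output for a 4x model.
with torch.no_grad():
    y = model(x)

# Post-processing: clamp to [0, 1] and convert back to an 8-bit image.
out = (y.squeeze(0).permute(1, 2, 0).clamp(0, 1) * 255).byte().numpy()
Image.fromarray(out).save("upscaled.png")
```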


### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Profiles the model's performance on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Checks accuracy between the PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.sesr_m5_quantized.export
```

```
Profile Job summary of SESR-M5-Quantized
--------------------------------------------------
Device: SA8255 (Proxy) (13)
Estimated Inference Time: 1.04 ms
Estimated Peak Memory Range: 0.03-8.62 MB
Compute Units: NPU (26) | Total (26)
```
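
Under the hood, the export script drives the Qualcomm® AI Hub Python client. A
rough sketch of the equivalent compile-and-profile flow is shown below; the
stand-in model, device name, and output file name are assumptions, and the real
export script also handles quantization details this sketch omits.

```python
import torch
import qai_hub as hub

# Trace any torch.nn.Module; here a toy stand-in for SESR-M5 with a 1x3x128x128 input.
torch_model = torch.nn.Upsample(scale_factor=4, mode="bilinear").eval()
traced = torch.jit.trace(torch_model, torch.rand(1, 3, 128, 128))

device = hub.Device("Samsung Galaxy S23 Ultra")

# Compile for the target device, then profile the compiled asset on it.
compile_job = hub.submit_compile_job(
    model=traced,
    device=device,
    input_specs=dict(image=(1, 3, 128, 128)),
)
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)

# Download the compiled asset for on-device deployment (file name is a placeholder).
compile_job.get_target_model().download("SESR-M5.tflite")
```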





## Deploying the compiled model to Android

The model can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
  tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
  guide to deploying the `.tflite` model in an Android application (a quick
  host-side sanity check of the exported `.tflite` is sketched after this list).
- QNN (`.so` export): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
  provides instructions on how to use the `.so` shared library in an Android application.
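
Before wiring either asset into an Android app, a quick sanity check of the
exported `.tflite` with the TensorFlow Lite Python interpreter confirms the
graph loads and runs on the host (the local file path is a placeholder):

```python
import numpy as np
import tensorflow as tf

# Load the exported TFLite model (path is a placeholder).
interpreter = tf.lite.Interpreter(model_path="SESR-M5-Quantized.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed an all-zeros tensor with the model's expected shape and dtype,
# just to confirm the graph executes end to end.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()

print("output shape:", interpreter.get_tensor(out["index"]).shape)
```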


## View on Qualcomm® AI Hub
Get more details on SESR-M5-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/sesr_m5_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License
- The license for the original implementation of SESR-M5-Quantized can be found
  [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References
* [Collapsible Linear Blocks for Super-Efficient Super Resolution](https://arxiv.org/abs/2103.09404)
* [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/sesr)

## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).