diff --git a/README.md b/README.md
index 4f20604..363e6ce 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@
This is the official project of our paper Is a Green Screen Really Necessary for Real-Time Portrait Matting?
-MODNet is a trimap-free model for portrait matting in real time (on a single GPU).
+MODNet is a trimap-free model for portrait matting in real time under changing scenes.
---
@@ -23,7 +23,7 @@
## Video Matting Demo
We provide two real-time portrait video matting demos based on WebCam.
-If you have an Ubuntu system, we recommend you to try the [offline demo](demo/video_matting) to get a higher *fps*. Otherwise, you can access the [online Colab demo](https://colab.research.google.com/drive/1Pt3KDSc2q7WxFvekCnCLD8P0gBEbxm6J?usp=sharing).
+If you have an Ubuntu system, we recommend trying the [offline demo](demo/video_matting/webcam) to get a higher *fps*. Otherwise, you can access the [online Colab demo](https://colab.research.google.com/drive/1Pt3KDSc2q7WxFvekCnCLD8P0gBEbxm6J?usp=sharing).
## Image Matting Demo
diff --git a/demo/image_matting/README.md b/demo/image_matting/colab/README.md
similarity index 100%
rename from demo/image_matting/README.md
rename to demo/image_matting/colab/README.md
diff --git a/demo/image_matting/inference.py b/demo/image_matting/colab/inference.py
similarity index 100%
rename from demo/image_matting/inference.py
rename to demo/image_matting/colab/inference.py
diff --git a/demo/video_matting/README.md b/demo/video_matting/webcam/README.md
similarity index 70%
rename from demo/video_matting/README.md
rename to demo/video_matting/webcam/README.md
index 8847149..3b81a7a 100644
--- a/demo/video_matting/README.md
+++ b/demo/video_matting/webcam/README.md
@@ -1,18 +1,17 @@
## MODNet - WebCam-Based Portrait Video Matting Demo
-This is a MODNet portrait video matting demo based on WebCam. It will call your local WebCam and display the matting results in real time.
+This is a MODNet portrait video matting demo based on WebCam. It will call your local WebCam and display the matting results in real time. The demo can run on either a CPU or a GPU.
### 1. Requirements
The basic requirements for this demo are:
- Ubuntu System
- WebCam
-- Nvidia GPU with CUDA
- Python 3+
**NOTE**: If your device does not satisfy the above conditions, please try our [online Colab demo](https://colab.research.google.com/drive/1Pt3KDSc2q7WxFvekCnCLD8P0gBEbxm6J?usp=sharing).
### 2. Introduction
-We use ~400 unlabeled video clips (divided into ~50,000 frames) downloaded from the internet to perform SOC to adapt MODNet to the video domain. Nonetheless, due to insufficient labeled training data (~3k labeled foregrounds), our model may still make errors in portrait semantics estimation under challenging scenes. Besides, this demo does not currently support the OFD trick, which will be provided soon.
+We use ~400 unlabeled video clips (divided into ~50,000 frames) downloaded from the internet to perform SOC to adapt MODNet to the video domain. **Nonetheless, due to insufficient labeled training data (~3k labeled foregrounds), our model may still make errors in portrait semantics estimation under challenging scenes.** In addition, this demo does not currently support the OFD trick, which will be provided soon.
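+
+As a rough, unofficial sketch of the OFD idea: a pixel is treated as flicker when its alpha values in the previous and next frames agree with each other but both differ from the current frame, and it is then replaced by the average of those neighbors. The helper name `ofd_smooth` and the tolerance `tol` below are illustrative assumptions, not part of this repo:
+
+```
+import numpy as np
+
+def ofd_smooth(alpha_prev, alpha_curr, alpha_next, tol=0.1):
+    # Flicker: the two neighboring frames agree, but the current frame disagrees with both.
+    neighbors_agree = np.abs(alpha_prev - alpha_next) <= tol
+    curr_disagrees = (np.abs(alpha_curr - alpha_prev) > tol) & (np.abs(alpha_curr - alpha_next) > tol)
+    flicker = neighbors_agree & curr_disagrees
+
+    # Replace flickering pixels with the average of their temporal neighbors.
+    smoothed = alpha_curr.copy()
+    smoothed[flicker] = 0.5 * (alpha_prev[flicker] + alpha_next[flicker])
+    return smoothed
+```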
For a better experience, please:
@@ -33,18 +32,18 @@ We recommend creating a new conda virtual environment to run this demo, as follo
2. Download the pre-trained model from this [link](https://drive.google.com/file/d/1Nf1ZxeJZJL8Qx9KadcYYyEmmlKhTADxX/view?usp=sharing) and put it into the folder `MODNet/pretrained/`.
-3. Create a conda virtual environment named `modnet-webcam` and activate it:
+3. Create a conda virtual environment named `modnet` (if it doesn't exist) and activate it:
```
- conda create -n modnet-webcam python=3.6
- source activate modnet-webcam
+ conda create -n modnet python=3.6
+ source activate modnet
```
4. Install the required python dependencies (here we use PyTorch==1.0.0):
```
- pip install -r demo/video_matting/requirements.txt
+ pip install -r demo/video_matting/webcam/requirements.txt
```
5. Execute the main code:
```
- python -m demo.video_matting.webcam
+ python -m demo.video_matting.webcam.run
```
diff --git a/demo/video_matting/requirements.txt b/demo/video_matting/webcam/requirements.txt
similarity index 100%
rename from demo/video_matting/requirements.txt
rename to demo/video_matting/webcam/requirements.txt
diff --git a/demo/video_matting/webcam.py b/demo/video_matting/webcam/run.py
similarity index 82%
rename from demo/video_matting/webcam.py
rename to demo/video_matting/webcam/run.py
index fc09aeb..3aaa1a3 100644
--- a/demo/video_matting/webcam.py
+++ b/demo/video_matting/webcam/run.py
@@ -19,8 +19,17 @@ torch_transforms = transforms.Compose(
print('Load pre-trained MODNet...')
pretrained_ckpt = './pretrained/modnet_webcam_portrait_matting.ckpt'
modnet = MODNet(backbone_pretrained=False)
-modnet = nn.DataParallel(modnet).cuda()
-modnet.load_state_dict(torch.load(pretrained_ckpt))
+modnet = nn.DataParallel(modnet)
+
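+# Run the model on a GPU if a CUDA device is available; otherwise fall back to the CPU.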
+GPU = torch.cuda.device_count() > 0
+if GPU:
+ print('Use GPU...')
+ modnet = modnet.cuda()
+ modnet.load_state_dict(torch.load(pretrained_ckpt))
+else:
+ print('Use CPU...')
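+    # map_location remaps tensors saved on CUDA onto the CPU, so loading works without a GPU.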
+ modnet.load_state_dict(torch.load(pretrained_ckpt, map_location=torch.device('cpu')))
+
modnet.eval()
print('Init WebCam...')