The Escape Handbook Recommended by the Chinese Nationalist Party (KMT)

Lee Chih-ming (李志銘): Let me also talk about the KMT's own "little orange book." In 1979, under KMT rule, the General Political Warfare Department of the Ministry of National Defense compiled and printed Questions and Answers Exposing the Chinese Communists' United Front Conspiracy (《揭穿中共統戰陰謀答問》). Although the booklet runs to only 7 pages and 8 questions, every word counts, and read today it throws into sharp relief how the KMT has gone from opposing the Communists to fawning over them.

For example, Question 3 asks: the KMT and the CCP have a history of two periods of cooperation and one round of peace talks, so why can't we talk again?

Answer: Because we were taken in by the Chinese Communists every time and ultimately lost the mainland, we cannot negotiate with them now. The first time was the "admission of the Communists" in 1924. The Communists claimed to embrace the Three Principles of the People and to join the National Revolution, yet they built up their own organization and split the revolutionary camp, nearly causing the Northern Expedition to fail. The second time was the so-called "united resistance against Japan" in 1937. Exploiting Japan's invasion of our country, the Communists claimed to accept the government's leadership, uphold the Three Principles of the People, and give up their rebel regime and the name of the Red Army in order to resist Japan together. In reality they expanded their own forces and colluded with Japan to ambush the National Army, laying the groundwork for a wider rebellion. The last time was the postwar "KMT-CCP peace talks." Under American mediation, the Communists talked while they fought, using "negotiation" to shield themselves and military action to seize territory; along the way they drove a wedge between China and the United States and sapped our people's morale, and in the end they usurped the mainland. With these experiences we know that peace talks amount to disarming ourselves and surrendering to them, so we can never negotiate with the Chinese Communists again.

Another example is Question 7: the Chinese Communists no longer speak of "liberating Taiwan" or "washing Taiwan in blood" but of "returning to the motherland and completing unification"; does that mean they have renounced the use of force?

Answer: The Communist Party is expert at playing conjuring tricks with words. We must not be fooled; we must always dig out the true meaning behind the terms they use. Fundamentally, "liberating Taiwan" has been written into the Chinese Communists' "constitution." The goal is fixed, and as they themselves put it, whether it takes ten years, twenty years, or even a hundred or a thousand years, they will reach it. So the goal of annexing Taiwan will not change.

Download the full PDF of 《揭穿中共統戰陰謀答問》: https://reurl.cc/6b5zvM

Set up Intel OpenVINO and AWS Greengrass on Ubuntu

  1. First set up the Model Optimizer conversion tool: https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
  2. Command : `source /opt/intel/computer_vision_sdk/bin/setupvars.sh`
  3. Command : `cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites`
  4. Command : `sudo -E ./install_prerequisites.sh`
  5. Model Optimizer uses Python 3.5, whereas Greengrass samples use Python 2.7. In order for Model Optimizer not to influence the global Python configuration, activate a virtual environment as below:
  6. Command : `sudo ./install_prerequisites.sh venv`
  7. Command : `cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer`
  8. Command : `source venv/bin/activate`
  9. For CPU, models should be converted with data type FP32; for GPU/FPGA, use data type FP16 for the best performance.
  10. For classification using the BVLC AlexNet model:
    Command : `python mo.py --framework caffe --input_model <model_location>/bvlc_alexnet.caffemodel --input_proto <model_location>/deploy.prototxt --data_type <data_type> --output_dir <output_dir> --input_shape [1,3,227,227]`
  11. For object detection using the SqueezeNetSSD-5Class model:
    Command : `python mo.py --framework caffe --input_model <model_location>/SqueezeNetSSD-5Class.caffemodel --input_proto <model_location>/SqueezeNetSSD-5Class.prototxt --data_type <data_type> --output_dir <output_dir>`
  12. Here `<model_location>` is the directory where the user downloaded the models, `<data_type>` is FP32 or FP16 depending on the target device, and `<output_dir>` is the directory where the user wants to store the IR. The IR consists of an .xml file describing the network structure and a .bin file containing the weights. The .xml file should be passed to the model parameter mentioned in the Configuring the Lambda Function section (see the loading sketch right after this list). In the BVLC AlexNet model, the prototxt defines the input shape with batch size 10 by default. To use any other batch size, the entire input shape must be provided as an argument to the Model Optimizer. For example, to use batch size 1, provide `--input_shape [1,3,227,227]`.
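
To show how the converted IR is actually consumed on the device, here is a minimal loading sketch using the Inference Engine Python API bundled with the 2018.x toolkit (IENetwork/IEPlugin). It is not the official Greengrass sample: the PARAM_MODEL_XML variable, the fallback file names, and the dummy input are assumptions for illustration, while PARAM_CPU_EXTENSION_PATH matches the variable listed further down in this post.

```python
# Minimal sketch (not the official sample): load the IR generated above and run one
# inference with the 2018.x Inference Engine Python API.
import os
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

# PARAM_MODEL_XML is assumed here; PARAM_CPU_EXTENSION_PATH is the variable shown below.
model_xml = os.environ.get("PARAM_MODEL_XML", "bvlc_alexnet.xml")
model_bin = os.path.splitext(model_xml)[0] + ".bin"   # the weights sit next to the .xml
cpu_extension = os.environ.get("PARAM_CPU_EXTENSION_PATH", "")

plugin = IEPlugin(device="CPU")                       # use "GPU" with an FP16 IR
if cpu_extension:
    plugin.add_cpu_extension(cpu_extension)           # needed for some layers on CPU in 2018.x

# Older 2018 releases expose this as IENetwork.from_ir(model=..., weights=...).
net = IENetwork(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))

exec_net = plugin.load(network=net)

# Dummy input matching the AlexNet conversion shape [1, 3, 227, 227] used above.
frame = np.zeros((1, 3, 227, 227), dtype=np.float32)
result = exec_net.infer(inputs={input_blob: frame})
print(result[output_blob].shape)
```

The same pattern applies to the SqueezeNetSSD-5Class IR; only the input shape and the post-processing of the output differ.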
The Greengrass samples are located in:
/opt/intel/computer_vision_sdk/inference_engine/samples/python_samples/greengrass_samples/

However, with the openvino_toolkit_p_2018.3.343 release some of these paths have changed and need to be updated as follows (Python 2):

LD_LIBRARY_PATH:
/opt/intel/computer_vision_sdk/opencv/share/OpenCV/3rdparty/lib:/opt/intel/computer_vision_sdk/opencv/lib:/opt/intel/opencl:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/cldnn/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/model_optimizer_caffe/bin:/opt/intel/computer_vision_sdk/openvx/lib

PYTHONPATH:
/opt/intel/computer_vision_sdk/python/python2.7/ubuntu16/

PARAM_CPU_EXTENSION_PATH:
/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_avx2.so
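
The Greengrass samples pick these values up as environment variables configured on the Lambda function in the Greengrass group. Below is a small sanity-check sketch, assuming Python 2.7 on the device; it is not part of the toolkit, just a hypothetical helper that confirms the variables above are visible to the process and that every path they reference exists.

```python
# Hypothetical debugging helper: verify that the environment variables listed above are
# set for the Lambda process and that each path they reference actually exists.
import os

VARIABLES = ["LD_LIBRARY_PATH", "PYTHONPATH", "PARAM_CPU_EXTENSION_PATH"]

def check_environment():
    for name in VARIABLES:
        value = os.environ.get(name)
        if not value:
            print("%s is not set" % name)
            continue
        # LD_LIBRARY_PATH and PYTHONPATH may hold several colon-separated entries.
        missing = [entry for entry in value.split(":")
                   if entry and not os.path.exists(entry)]
        if missing:
            print("%s has missing paths: %s" % (name, ", ".join(missing)))
        else:
            print("%s OK" % name)

if __name__ == "__main__":
    check_environment()
```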
