# [FEATURE] Restore Quantization API to MXNet (#19587)
# Co-authored-by: Sylwester Fraczek <sylwester.fraczek@intel.com>
# Co-authored-by: grygielski <adam.grygielski@gmail.com>
# Committed: 2020-12-12 05:31:00 +01:00
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import os
import sys

import mxnet as mx

# Make the shared quantization tests (test_quantization.py) importable
# from the sibling ../quantization directory.
curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
sys.path.insert(0, os.path.join(curr_path, '../quantization'))
from mxnet.test_utils import set_default_device
# Re-run the entire shared quantization test suite with the default
# device set to the first GPU.
from test_quantization import *
set_default_device(mx.gpu(0))