Keras: 基于 Python 的深度学习库 (Keras: the Python deep learning library)
… /tmp/.keras/ is used as a backup. The Keras configuration file is a JSON file stored at $HOME/.keras/keras.json. The default configuration file looks like this: { "image_data_format": "channels_last", "epsilon": 1e-07, "floatx": "float32", "backend": "tensorflow" }. It contains the following fields: … Conv2D [source] keras.layers.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', …). When this layer is used as the first layer of a model, the input_shape argument must be provided (a tuple of integers, not including the sample axis); for example, input_shape=(128, 128, 3) for 128x128 RGB images with data_format="channels_last". Parameters: filters: integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: an integer, or a tuple/list of 2 integers, specifying …
257 pages | 1.19 MB | 1 year ago
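A minimal sketch (not taken from the indexed document) showing Conv2D as the first layer of a Keras model with the input_shape=(128, 128, 3) example described in the excerpt; the rest of the architecture is an assumption added purely for illustration.

    from tensorflow import keras

    model = keras.Sequential([
        # First layer: must receive input_shape (the sample axis is excluded).
        keras.layers.Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1),
                            padding='valid', activation='relu',
                            input_shape=(128, 128, 3)),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(10, activation='softmax'),
    ])
    model.summary()
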
keras tutorial
… { "image_data_format": "channels_last", "epsilon": 1e-07, "floatx": "float32", "backend": "tensorflow" } Here, image_data_format represents the data format, and epsilon represents the … backend = theano in the keras.json file. It is described below: keras.json { "image_data_format": "channels_last", "epsilon": 1e-07, "floatx": "float32", "backend": "theano" } … information as specified below:
    >>> k.backend()
    'tensorflow'
    >>> k.epsilon()
    1e-07
    >>> k.image_data_format()
    'channels_last'
    >>> k.floatx()
    'float32'
Let us understand some of the significant backend …
98 pages | 1.57 MB | 1 year ago
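The excerpt calls these functions on an object k without showing where it comes from; a minimal sketch, under the assumption that k is the keras.backend module:

    from keras import backend as k  # assumed import; the excerpt does not show it

    print(k.backend())             # e.g. 'tensorflow'
    print(k.epsilon())             # e.g. 1e-07
    print(k.image_data_format())   # e.g. 'channels_last'
    print(k.floatx())              # e.g. 'float32'
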
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
… Discrete Cosine Transform (DCT) is a popular algorithm which is used in the JPEG format for image compression, and the MP3 format for audio. DCT breaks down the given input data into independent components … the output for a given input). Quantization transformed our weight matrices to a quantized integer format. However, the model layers use floating point computations. It is critical that our models are accurate … (test_x, test_y) = load_data() # You can train on the Fashion MNIST dataset, which has the exact same format # as MNIST, and is slightly harder. # (train_x, train_y), (test_x, test_y) = load_data(ds=tf.keras…
33 pages | 1.96 MB | 1 year ago
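A minimal sketch (not the book's code) of the idea behind the "quantized integer format" mentioned in the excerpt: map a floating point weight matrix to int8 with a per-tensor scale, then map it back to floating point for computation.

    import numpy as np

    def quantize(w, bits=8):
        qmax = 2 ** (bits - 1) - 1           # 127 for int8
        scale = np.max(np.abs(w)) / qmax     # single per-tensor scale factor
        q = np.round(w / scale).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize(w)
    print(np.max(np.abs(w - dequantize(q, scale))))  # small round-off error
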
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
… sparsity, total = get_conv_block_sparsity(block) print('Block: {} Sparsity: {}% Total Weights: {}'.format(block.name, sparsity, total)) Output: Block: conv_block_0 Sparsity: 0.0% Total Weights: 864 Block: … model seems to have trained to a reasonable accuracy. We can now persist it to disk in the SavedModel format. import tempfile _, keras_file = tempfile.mkstemp('.h5') print('Saving model to: ', keras_file) … embeddings themselves can be saved in an optimized format. The table can either be converted back to the original floating point tensor format in its entirety when loading the table into memory, or only …
34 pages | 3.18 MB | 1 year ago
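get_conv_block_sparsity() in the excerpt is the book's own helper and is not shown here; a minimal sketch, under the assumption that it simply reports the fraction of zero-valued weights in a layer's kernel:

    import numpy as np
    import tensorflow as tf

    def layer_sparsity(layer):
        kernel = layer.get_weights()[0]            # the convolution kernel tensor
        total = kernel.size
        zeros = int(np.sum(kernel == 0))
        return 100.0 * zeros / total, total

    conv = tf.keras.layers.Conv2D(32, 3)
    conv.build((None, 28, 28, 1))                  # 3*3*1*32 = 288 weights
    sparsity, total = layer_sparsity(conv)
    print('Sparsity: {}% Total Weights: {}'.format(sparsity, total))
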
AI大模型千问 qwen 中文文档 (Qwen large language model, Chinese documentation)
… chat(), we directly use model.generate() # But you need to use tokenizer.apply_chat_template() to format your inputs as shown below prompt = "Give me a short introduction to large language model." messages … model named Qwen..."} ] Then simply run the calibration process with one line of code: … (section 1.8, GPTQ) import logging logging.basicConfig(format="%(asctime)s %(levelname)s [%(name)s] %(message)s", level=logging.INFO, datefmt="%Y-%m-%d %H:%M:%S") …
56 pages | 835.78 KB | 1 year ago
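A minimal sketch following the pattern in the excerpt: format chat messages with tokenizer.apply_chat_template() before calling model.generate(). The checkpoint name and generation settings below are assumptions, not taken from the indexed document.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen1.5-7B-Chat"  # assumed checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    prompt = "Give me a short introduction to large language model."
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]
    # Turn the message list into the model's chat prompt string.
    text = tokenizer.apply_chat_template(messages, tokenize=False,
                                         add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256)
    # Strip the prompt tokens and decode only the newly generated text.
    print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:],
                           skip_special_tokens=True))
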
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
… search_results.append(min_loss) fmt = 'Trial: {} learning_rate: {} layer_size: {} loss: {}' print(fmt.format(trial_id, learning_rate, layer_size, min_loss)) best_trial_id = np.argmin(search_results) best_loss … min(search_results) print('\n=============== Search Summary ===============') print('Best Trial: {} Loss: {}'.format(best_trial_id, best_loss)) Trial: 0 learning_rate: 0.01 layer_size: 5 loss: 0.15629929304122925 … reward, accuracy = child_manager.get_rewards(config) print('Episode: {} Reward: {} Accuracy: {}'.format(episode, reward, accuracy)) # Store predicted child and its rewards controller.save_trial(predictions…
33 pages | 2.48 MB | 1 year ago
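A minimal, self-contained sketch (not the book's code) of the kind of search loop behind the printouts above: sample hyperparameters, record each trial's best loss, and report the winning trial. train_and_evaluate() below is a stand-in for real model training, introduced only for illustration.

    import numpy as np

    def train_and_evaluate(learning_rate, layer_size):
        # Stand-in for training; returns a fake "minimum validation loss".
        return abs(np.log10(learning_rate) + 3) * 0.1 + 1.0 / layer_size

    search_results = []
    for trial_id in range(5):
        learning_rate = 10 ** np.random.uniform(-4, -1)
        layer_size = int(np.random.choice([5, 10, 20, 40]))
        min_loss = train_and_evaluate(learning_rate, layer_size)
        search_results.append(min_loss)
        print('Trial: {} learning_rate: {} layer_size: {} loss: {}'.format(
            trial_id, learning_rate, layer_size, min_loss))

    best_trial_id = int(np.argmin(search_results))
    best_loss = min(search_results)
    print('Best Trial: {} Loss: {}'.format(best_trial_id, best_loss))
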
PyTorch Release Notes
… enhancements. ‣ PyTorch container image version 22.06 is based on 1.13.0a0+340c412. ‣ The TF32 numerical format is enabled by default for cuBLAS and cuDNN operations starting with the 22.06 release. If you encounter … the PyTorch container does not build Caffe2 anymore. If scripted models were exported in the legacy format (using our 19.09 or previous NGC containers, corresponding to PyTorch 1.2.0 or previous releases) … we recommend disabling cuDNN via torch.backends.cudnn.enabled = False. ‣ Channels-last memory format is experimental in the 20.07 container. Potential convergence issues for ResNet variants are being …
365 pages | 2.94 MB | 1 year ago
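A minimal sketch of the PyTorch switches the excerpt refers to: disabling cuDNN, explicitly controlling TF32 (whose default varies by release), and requesting the channels-last memory format for a 4D tensor.

    import torch

    torch.backends.cudnn.enabled = False           # fall back from cuDNN kernels
    torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for cuBLAS matmuls
    torch.backends.cudnn.allow_tf32 = True         # TF32 for cuDNN convolutions (when enabled)

    # Channels-last (NHWC) memory format for an NCHW-shaped tensor:
    x = torch.randn(8, 3, 224, 224).to(memory_format=torch.channels_last)
    print(x.is_contiguous(memory_format=torch.channels_last))  # True
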
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
… them. When working with deep learning models and inputs such as text, which are not in numerical format, having an algorithmic way to meaningfully represent these inputs using a small number of numerical … 'NaturalPlace', 'Village', 'Animal', 'Plant', 'Album', 'Film', 'WrittenWork'] The data is in CSV format with columns: class-id, title and description. The class id is 1-indexed, and the other two fields …
53 pages | 3.92 MB | 1 year ago
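A minimal sketch of loading a CSV with the columns described above (class-id, title, description) and shifting the 1-indexed class ids to 0-indexed. The file name and the absence of a header row are assumptions, not stated in the excerpt.

    import pandas as pd

    columns = ['class_id', 'title', 'description']
    df = pd.read_csv('train.csv', header=None, names=columns)  # 'train.csv' is assumed
    df['class_id'] = df['class_id'] - 1   # classes are 1-indexed in the file
    print(df.head())
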
【PyTorch深度学习-龙龙老师】-测试版202112 (PyTorch Deep Learning by Longlong, beta edition 2021-12)
… # To print the tensor being optimized, break the gradient connection with detach(): print('step {}: x = {}, f(x) = {}'.format(step, x.detach().numpy(), y.detach().numpy())) After 200 iterations of updates, the program finds a local minimum where the function value is close to 0; the numerical solution found is: … Plot the decision boundary of networks with different numbers of layers: preds = model.predict_classes(np.c_[XX.ravel(), YY.ravel()]) title = "网络层数({})".format(n) file = "网络容量%f.png" % (2 + n * 1) make_plot(X_train, y_train, title, file, XX, YY, preds) … decision boundary curves (Dropout experiments): preds = model.predict_classes(np.c_[XX.ravel(), YY.ravel()]) title = "Dropout({})".format(n) file = "Dropout%f.png" % (n) make_plot(X_train, y_train, title, file, XX, YY, preds)
439 pages | 29.91 MB | 1 year ago
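A minimal, self-contained sketch of the meshgrid trick used above: predict a class for every point of a grid with np.c_[XX.ravel(), YY.ravel()] and colour the plane by the result to visualize the decision boundary. It uses scikit-learn's MLPClassifier instead of the book's Keras model purely to keep the example short; all names are assumptions.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import make_moons
    from sklearn.neural_network import MLPClassifier

    X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                          random_state=0).fit(X, y)

    # Evaluate the classifier on a dense grid covering the data.
    XX, YY = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 300),
                         np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 300))
    preds = model.predict(np.c_[XX.ravel(), YY.ravel()]).reshape(XX.shape)

    plt.contourf(XX, YY, preds, alpha=0.3)          # coloured decision regions
    plt.scatter(X[:, 0], X[:, 1], c=y, s=10)        # training points
    plt.title('Decision boundary')
    plt.savefig('decision_boundary.png')
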
PyTorch OpenVINO 开发实战系列教程第一篇 (PyTorch + OpenVINO hands-on development tutorial series, part 1)
… parameters loss.backward() # Updating parameters optimizer.step() print('epoch {}, loss {}'.format(epoch, loss.item())) Step 5: using the parameters obtained from training, run the model to predict the regression line and display it; the code is as follows: predicted = …
13 pages | 5.99 MB | 1 year ago
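A minimal sketch (not the tutorial's exact code) of the linear-regression training loop the excerpt comes from: forward pass, loss, backward pass, and optimizer step, followed by the predicted regression line. The synthetic data and hyperparameters are assumptions.

    import torch

    x = torch.linspace(0, 10, 100).unsqueeze(1)
    y = 2.0 * x + 1.0 + 0.5 * torch.randn_like(x)    # noisy line y = 2x + 1

    model = torch.nn.Linear(1, 1)
    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(100):
        optimizer.zero_grad()           # clear accumulated gradients
        loss = criterion(model(x), y)   # forward pass + loss
        loss.backward()                 # backpropagate
        optimizer.step()                # update parameters
        if epoch % 20 == 0:
            print('epoch {}, loss {}'.format(epoch, loss.item()))

    predicted = model(x).detach()       # regression line from the trained model
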
共 12 条
- 1
- 2













