【PyTorch深度学习-龙龙老师】-测试版202112 (Deep Learning with PyTorch, 龙龙老师, beta 202112)
… one-hot encoding function, where depth sets the length of the encoded vector:
    out = torch.zeros(label.size(0), depth)
    idx = torch.LongTensor(label).view(-1, 1)
    out.scatter_(dim=1, index=idx, value=1)
    return out
    y = torch.tensor([0, 1, 2, …
    # x: [b, 1, 28, 28], y: [512]
    # Flatten: [b, 1, 28, 28] => [b, 784]
    x = x.view(x.size(0), 28*28)
    # Feed through the model, => [b, 10]
    out = model(x)
    # One-hot encode the labels …
… transpose operations, data-copying operations such as tile, and so on, each introduced below.
4.7.1 Changing the view
Before introducing the view-changing reshape operation, let us first look at the concepts of tensor storage (Storage) and view (View). A tensor's view is the way people understand the tensor: for example, a tensor A of shape [2, 3, 4, 4] can be understood logically as 2 images, each with 4 rows and 4 columns, and RGB data in 3 channels at every position; the tensor's storage …
0 码力 | 439 pages | 29.91 MB | 1 year ago
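The one-hot helper in this excerpt is cut off at its signature. Below is a minimal runnable reconstruction; the function name `one_hot` and the default `depth=10` are assumptions inferred from the surrounding text.

```python
import torch

def one_hot(label, depth=10):
    # Zero matrix of shape [batch, depth]; scatter a 1 into the column
    # given by each label, mirroring the excerpt's scatter_ call.
    out = torch.zeros(label.size(0), depth)
    idx = label.view(-1, 1).long()  # the excerpt used torch.LongTensor(label)
    out.scatter_(dim=1, index=idx, value=1)
    return out

y = torch.tensor([0, 1, 2])
print(one_hot(y, depth=10))  # 3x10 matrix with a single 1 per row
```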
PyTorch Release Notes
Component versions per release (the interleaved page headers "PyTorch Release 22.0x PyTorch RN-08516-001_v23.07 | 93/100/107" are page-break residue, folded into the release labels below):
‣ 22.08: Torch-TensorRT 1.1.0a0; NVIDIA DALI® 1.16.0; MAGMA 2.6.2; Jupyter and JupyterLab: Jupyter Client 6.0.0, Jupyter Core 4.6.1, Jupyter …
‣ 22.07: Torch-TensorRT 1.1.0a0; NVIDIA DALI® 1.15.0; MAGMA 2.6.2; Jupyter and JupyterLab: Jupyter Client 6.0.0, Jupyter Core 4.6.1, Jupyter …
‣ 22.06: Torch-TensorRT 1.1.0a0; NVIDIA DALI® 1.14.0; MAGMA 2.6.2; Jupyter and JupyterLab: Jupyter Client 6.0.0, Jupyter Core 4.6.1, Jupyter …
0 码力 | 365 pages | 2.94 MB | 1 year ago
PyTorch OpenVINO 开发实战系列教程第一篇 (PyTorch OpenVINO hands-on development series, part 1)
Besides the reshape function, there is another tensor dimension-conversion method, tensor.view(); its usage is demonstrated below:
    x = torch.randn(4, 4)
    print(x.size())
    x = x.view(-1, 8)
    print(x.size())
    x = x.view(1, 1, 4, 4)
    print(x.size())
Output:
    torch.Size([4, 4])
    torch.Size([2, 8])
    torch.Size([1, 1, 4, 4])
Here torch.randn(4, 4) creates a 4x4 random tensor; x.view(-1, 8) reshapes it to eight columns per row, where -1 means the number of rows is inferred automatically; x.view(1, 1, 4, 4) converts it to a four-dimensional 1x1x4x4 tensor. x.size() reports the tensor's dimensions.
● Other attribute operations: channel swapping and finding the maximum are … (the excerpt interleaves a channel-swap example, whose code is cut off, with an argmax example):
    print(x.size())
    x = torch.tensor([2., 3., 4., 12., 3., 5., 8., 1.])
    print(torch.argmax(x))
    x = x.view(-1, 4)
    print(x.argmax(1))
Output (the first two shapes belong to the missing channel-swap code):
    torch.Size([5, 5, 3])
    torch.Size([3, 5, 5])
    tensor(3)
    tensor([3, 2])
0 码力 | 13 pages | 5.99 MB | 1 year ago
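The excerpt prints the shapes [5, 5, 3] and [3, 5, 5], but the channel-swap code that produced them is cut off. A minimal sketch of what likely produced those shapes; whether the tutorial used permute or transpose is an assumption:

```python
import torch

x = torch.randn(5, 5, 3)   # HWC layout
print(x.size())            # torch.Size([5, 5, 3])
x = x.permute(2, 0, 1)     # move channels first -> CHW
print(x.size())            # torch.Size([3, 5, 5])
```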
AI大模型千问 qwen 中文文档 (Qwen large-model Chinese documentation)
The docs repeat the same OpenAI-compatible client pattern for each served model (Qwen/Qwen1.5-7B-Chat, …-Chat-AWQ, …-Chat-GPTQ-Int8); the page-break markers "(续下页)"/"(接上页)" and the section header "1.2 快速开始" (Quick Start) are stripped here:
    openai_api_key = "EMPTY"
    openai_api_base = "http://localhost:8000/v1"
    client = OpenAI(
        api_key=openai_api_key,
        base_url=openai_api_base,
    )
    chat_response = client.chat.completions.create(
        model="Qwen/Qwen1.5-7B-Chat-AWQ",  # or the -Chat / -Chat-GPTQ-Int8 variants
        …
0 码力 | 56 pages | 835.78 KB | 1 year ago
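The create() call above is truncated before its messages argument. A runnable completion, assuming a local vLLM OpenAI-compatible server is serving the model; the example messages are mine, the rest follows the excerpt's pattern:

```python
from openai import OpenAI

client = OpenAI(
    api_key="EMPTY",  # the local server does not check the key
    base_url="http://localhost:8000/v1",
)
chat_response = client.chat.completions.create(
    model="Qwen/Qwen1.5-7B-Chat-AWQ",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me something about large language models."},
    ],
)
print(chat_response.choices[0].message.content)
```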
pytorch 入门笔记-03- 神经网络 (PyTorch primer notes 03: neural networks)
Forward pass of the example network:
    x = F.max_pool2d(F.relu(self.conv1(x)), 2)
    x = F.max_pool2d(F.relu(self.conv2(x)), 2)
    x = x.view(-1, self.num_flat_features(x))  # flatten
    x = F.relu(self.fc1(x))
    x = F.relu(self.fc2(x))
MSELoss is a fairly simple loss function that computes the mean squared error between the output and the target, for example:
    output = net(input)
    target = torch.rand(10)
    target = target.view(1, -1)
    criterion = nn.MSELoss()
    loss = criterion(output, target)
    print(loss)
    # tensor(0.4526, grad_fn=<MseLossBackward0>)
Following the .grad_fn attribute reveals a computation graph like this:
    input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
          -> view -> linear -> relu -> linear -> relu -> linear
          -> MSELoss -> loss
So when we call loss.backward(), the whole computation graph …
0 码力 | 7 pages | 370.53 KB | 1 year ago
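A minimal sketch of the backward pass the note describes, using a stand-in linear model instead of the note's full convnet (an assumption made to keep the example self-contained):

```python
import torch
import torch.nn as nn

model = nn.Linear(784, 10)
x = torch.randn(1, 784)
target = torch.rand(1, 10)

criterion = nn.MSELoss()
loss = criterion(model(x), target)

model.zero_grad()               # clear any existing gradients
loss.backward()                 # backpropagate through the whole graph
print(model.weight.grad.shape)  # torch.Size([10, 784])
```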
机器学习课程-温州大学-03深度学习-PyTorch入门 (Machine learning course, Wenzhou University, 03 deep learning: PyTorch introduction)
NumPy-to-PyTorch correspondence (flattened table reconstructed):
    Attributes:      x.ndim → x.dim();  x.size → x.nelement()
    Shape ops:       x.reshape → x.reshape (equivalent to tensor.contiguous().view()) / x.view;  x.flatten → x.view(-1) / nn.Flatten()
    Type conversion: np.floor(x) → torch.floor(x) / x.floor()
    Comparison:      np.less → x.lt;  np.less_equal → x.le;  np…
0 码力 | 40 pages | 1.64 MB | 1 year ago
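A quick self-contained check of the correspondences in the table above, on toy tensors of my own choosing:

```python
import numpy as np
import torch

x_np = np.arange(6.0).reshape(2, 3)
x_t = torch.arange(6.0).reshape(2, 3)

print(x_np.ndim, x_t.dim())        # 2 2
print(x_np.size, x_t.nelement())   # 6 6
print(np.floor(x_np / 2.0))        # matches torch.floor(x_t / 2.0)
print(torch.floor(x_t / 2.0))
print(np.less(x_np, 3.0))          # matches x_t.lt(3.0)
print(x_t.lt(3.0))
```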
深度学习与PyTorch入门实战 - 09. 维度变换 (Deep Learning with PyTorch hands-on, 09: dimension transforms)
Tensor dimension transforms, lecturer 龙良曲. Operations covered (slide page numbers stripped; a sketch of all four families follows below):
▪ View/reshape: loses dimension information; flexible but prone to corruption
▪ Squeeze / unsqueeze
▪ Transpose / t / permute
▪ Expand / repeat
0 码力 | 16 pages | 1.66 MB | 1 year ago
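A minimal sketch of the four operation families the slides list; the shapes are my own examples, not the lecture's:

```python
import torch

x = torch.rand(4, 1, 28, 28)
print(x.view(4, 784).shape)             # view/reshape: [4, 784], dim info lost
print(x.squeeze(1).shape)               # squeeze: [4, 28, 28]
print(x.squeeze(1).unsqueeze(0).shape)  # unsqueeze: [1, 4, 28, 28]
print(x.transpose(1, 3).shape)          # transpose: [4, 28, 28, 1]

b = torch.rand(1, 32, 1, 1)
print(b.expand(4, 32, 14, 14).shape)    # expand: broadcasts without copying
print(b.repeat(4, 1, 14, 14).shape)     # repeat: copies data -> [4, 32, 14, 14]
```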
《TensorFlow 快速入门与实战》3-TensorFlow基础概念解析 (TensorFlow Quick Start and Practice, 3: basic concepts)
The excerpt is mostly mojibake (runs of "�") from lost Chinese text mentioning CPU/GPU; what survives is residue of two architecture diagrams: [diagram: TensorFlow client → server (local machine) → workers /cpu:0 and /gpu:0] and [diagram: the client issuing RunStep() to the server, which dispatches to workers /cpu:0, /gpu:0, …].
0 码力 | 50 pages | 25.17 MB | 1 year ago
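Not from the course itself: a minimal TensorFlow 1.x sketch of the device placement the diagrams depict, where Session.run() plays the client role issuing a RunStep-style request. Assumes a TF1 environment; soft placement lets it fall back to CPU if no GPU is present.

```python
import tensorflow as tf  # TensorFlow 1.x API, matching the course's era

# Pin ops to the workers named in the diagrams.
with tf.device("/cpu:0"):
    a = tf.constant([1.0, 2.0], name="a")
with tf.device("/gpu:0"):
    b = a * 2.0

# The Session acts as the client: run() sends the request to the
# (local) server, which dispatches work to the workers.
config = tf.ConfigProto(allow_soft_placement=True)  # CPU fallback if no GPU
with tf.Session(config=config) as sess:
    print(sess.run(b))  # [2. 4.]
```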
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
"… we tolerate? Let us slowly build up to that by exploring how quantization can help us. A Generic View of Quantization: Quantization is a common compression technique that has been used across different …"
"… matrix. D is often a one-dimensional vector, hence the addition is cheap both from the latency point of view and size-wise (since C dominates the size). In fact, the general formulation of Y = XW + b is the …"
0 码力 | 33 pages | 1.96 MB | 1 year ago
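A minimal sketch of the affine (scale-and-zero-point) quantization the excerpt builds toward; the 8-bit range and the exact rounding scheme are my assumptions, not necessarily the book's recipe:

```python
import numpy as np

def quantize(x, num_bits=8):
    # Map the float range [x.min(), x.max()] onto integers [0, 2^bits - 1].
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize(w)
print(np.abs(w - dequantize(q, s, z)).max())  # small reconstruction error
```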
Experiment 1: Linear Regression
"… should get a figure similar to Fig. 2. If you are using Matlab/Octave, you can use the orbit tool to view this plot from different viewpoints. What is the relationship between this 3D surface and the value …"
0 码力 | 7 pages | 428.11 KB | 1 year ago
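A hedged Python analogue of the exercise's Matlab/Octave surface plot: evaluating the cost J(θ) over a grid of (θ0, θ1). The synthetic data below stands in for the assignment's data files, which are not reproduced here:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic 1-D data standing in for the assignment's files.
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + 0.1 * np.random.randn(50)
X = np.column_stack([np.ones_like(x), x])  # add intercept column

theta0 = np.linspace(-1, 3, 100)
theta1 = np.linspace(-1, 4, 100)
T0, T1 = np.meshgrid(theta0, theta1)
J = np.zeros_like(T0)
for i in range(T0.shape[0]):
    for j in range(T0.shape[1]):
        r = X @ np.array([T0[i, j], T1[i, j]]) - y   # residuals
        J[i, j] = r @ r / (2 * len(y))               # squared-error cost

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(T0, T1, J)
ax.set_xlabel("theta0"); ax.set_ylabel("theta1"); ax.set_zlabel("J")
plt.show()
```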
15 results in total.