PyTorch Release Notes
…users, see the NVIDIA® GPU Cloud™ (NGC) container registry installation documentation for your platform. ‣ Ensure that you have access and can log in to the NGC container registry. Refer to the NGC Getting Started Guide. … results can be visualized in a browser or by analyzing text reports. DLProf is available on NGC or through a Python pip wheel installation. ‣ The TensorCore example models are no longer provided in the core PyTorch container (previously …).
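The release notes do not show the commands, so the following console sketch is an assumption based on NVIDIA's public DLProf documentation; package names and flags may differ across releases:

```
# A sketch of the pip wheel route for DLProf; package names and the
# --mode flag follow NVIDIA's DLProf docs and may change between releases.
pip install nvidia-pyindex            # registers NVIDIA's package index
pip install nvidia-dlprof[pytorch]    # DLProf plus its PyTorch plugin
dlprof --mode=pytorch python train.py # train.py is a placeholder script name
```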
rwcpu8 Instruction: Install Miniconda + PyTorch
…username. Since /rwproject/kdd-db/ is a remote folder, the installation may take several minutes to finish. 3. Add the code that initializes Miniconda to your shell initialization script. If ~/.tcshrc exists, ~/.cshrc_user won't be loaded, so you need to remove ~/.tcshrc. 4. Log out and log in again. If Miniconda was installed successfully, you should be able to see the usage of conda.
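As a quick check for step 4, the following Python sketch (illustrative, not part of the original instructions) verifies that `conda` is reachable from a fresh login:

```python
# A minimal sketch: confirm conda is on PATH after logging back in.
# If this prints None, the init block was not loaded; check that
# ~/.tcshrc has been removed as described above.
import shutil
import subprocess

conda_path = shutil.which("conda")
print("conda found at:", conda_path)
if conda_path:
    subprocess.run([conda_path, "--version"])  # e.g. "conda 23.1.0"
```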
Keras Tutorial
Square, Netflix, Huawei and Uber are currently using Keras. This tutorial walks through the installation of Keras, the basics of deep learning, Keras models, Keras layers, Keras modules, and finally concludes… From the table of contents: 2. Keras ― Installation; Keras Installation Steps; Keras Installation Using Python.
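Assuming the tutorial's plain pip route (`pip install keras`, not shown in this excerpt), a two-line check confirms the installation:

```python
# A minimal sketch: verify the Keras installation the tutorial describes.
import keras

print(keras.__version__)  # prints the installed version string
```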
PyTorch Tutorial
…class account.
• Miniconda is highly recommended, because:
  • It lets you manage your own Python installation
  • It installs locally; no admin privileges required
  • It's lightweight and fits within your disk…
Lecture 5: Gaussian Discriminant Analysis, Naive Bayes
Warm Up (Contd.) The log-likelihood function is

$$\ell(\theta) = \log \prod_{i=1}^{m} p_{X,Y}(x^{(i)}, y^{(i)}) = \log \prod_{i=1}^{m} p_{X|Y}(x^{(i)} \mid y^{(i)})\, p_Y(y^{(i)}) = \sum_{i=1}^{m} \left[ \log p_{X|Y}(x^{(i)} \mid y^{(i)}) + \log p_Y(y^{(i)}) \right]$$

where $\theta = \{p_{X|Y}, \dots\}$. … Given $m$ sample data, the log-likelihood is

$$\ell(\psi, \mu_0, \mu_1, \Sigma) = \log \prod_{i=1}^{m} p_{X,Y}(x^{(i)}, y^{(i)}; \psi, \mu_0, \mu_1, \Sigma) = \log \prod_{i=1}^{m} p_{X|Y}(x^{(i)} \mid y^{(i)}; \mu_0, \mu_1, \Sigma)\, p_Y(y^{(i)}; \psi) = \sum_{i=1}^{m} \log p_{X|Y}(x^{(i)} \mid y^{(i)}; \mu_0, \mu_1, \Sigma) + \sum_{i=1}^{m} \log p_Y(y^{(i)}; \psi)$$
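The maximizers of this log-likelihood have well-known closed forms; the NumPy sketch below states them with illustrative names (the $x^{(i)}$ are the rows of X) and is not taken from the slides:

```python
# A minimal sketch of the GDA maximum-likelihood estimates implied by the
# log-likelihood above; names are illustrative.
import numpy as np

def gda_fit(X, y):
    """X: (m, n) design matrix; y: (m,) array of 0/1 labels."""
    psi = y.mean()                            # estimate of P(y = 1)
    mu0 = X[y == 0].mean(axis=0)              # class-0 mean
    mu1 = X[y == 1].mean(axis=0)              # class-1 mean
    centered = X - np.where(y[:, None] == 1, mu1, mu0)
    Sigma = centered.T @ centered / len(y)    # shared covariance
    return psi, mu0, mu1, Sigma
```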
Lecture Notes on Gaussian Discriminant Analysis, Naive Bayes
…$\{(x^{(i)}, y^{(i)})\}_{i=1,\dots,m}$, the log-likelihood is defined as

$$\ell(\psi, \mu_0, \mu_1, \Sigma) = \log \prod_{i=1}^{m} p_{X,Y}(x^{(i)}, y^{(i)}; \psi, \mu_0, \mu_1, \Sigma) = \log \prod_{i=1}^{m} p_{X|Y}(x^{(i)} \mid y^{(i)}; \mu_0, \mu_1, \Sigma)\, p_Y(y^{(i)}; \psi) = \sum_{i=1}^{m} \log p_{X|Y}(x^{(i)} \mid y^{(i)}; \mu_0, \mu_1, \Sigma) + \sum_{i=1}^{m} \log p_Y(y^{(i)}; \psi) \tag{8}$$

where $\psi$, $\mu_0$, $\mu_1$, and $\Sigma$ are parameters. Substituting Eqs. (5)–(7) into Eq. (8) gives us a full expression of $\ell(\psi, \mu_0, \mu_1, \Sigma)$:

$$\begin{aligned}
\ell(\psi, \mu_0, \mu_1, \Sigma) ={}& \sum_{i:\, y^{(i)}=0} \log \left[ \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\left( -\frac{1}{2}(x^{(i)}-\mu_0)^{T} \Sigma^{-1} (x^{(i)}-\mu_0) \right) \right] \\
&+ \sum_{i:\, y^{(i)}=1} \log \left[ \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\left( -\frac{1}{2}(x^{(i)}-\mu_1)^{T} \Sigma^{-1} (x^{(i)}-\mu_1) \right) \right] \\
&+ \sum_{i=1}^{m} \log \left[ \psi^{y^{(i)}} (1-\psi)^{1-y^{(i)}} \right]
\end{aligned}$$
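For concreteness, this expression can be evaluated directly; the sketch below leans on scipy.stats.multivariate_normal for the Gaussian density and uses names that mirror the notes:

```python
# A minimal sketch: evaluate the GDA log-likelihood written out above.
import numpy as np
from scipy.stats import multivariate_normal

def gda_log_likelihood(X, y, psi, mu0, mu1, Sigma):
    means = np.where(y[:, None] == 1, mu1, mu0)      # mu_{y^(i)} per sample
    ll_x = sum(multivariate_normal.logpdf(x_i, mean=m_i, cov=Sigma)
               for x_i, m_i in zip(X, means))        # sum_i log p(x | y)
    ll_y = np.sum(y * np.log(psi) + (1 - y) * np.log(1 - psi))  # sum_i log p(y)
    return ll_x + ll_y
```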
Lecture 4: Regularization and Bayesian Statistics
The logistic-regression cost is

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1-y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$

Adding a term for regularization:

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1-y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right] + \frac{\lambda}{2m}\sum_{j=1}^{n} \theta_j^2$$

… the likelihood $L(\theta) = \prod_{i=1}^{m} p(d^{(i)}; \theta)$. MLE typically maximizes the log-likelihood instead of the likelihood:

$$\ell(\theta) = \log L(\theta) = \log \prod_{i=1}^{m} p(d^{(i)}; \theta) = \sum_{i=1}^{m} \log p(d^{(i)}; \theta)$$

Maximum-likelihood parameter estimation: $\theta_{\mathrm{MLE}} = \arg\max_\theta \ell(\theta) = \arg\max_\theta \sum_{i=1}^{m} \log p(d^{(i)}; \theta)$. Maximum-a-Posteriori Estimation (MAP) …
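The regularized cost translates directly to NumPy. In the sketch below (illustrative names), the penalty starts at j = 1 exactly as in the formula, so the intercept $\theta_0$ is not regularized:

```python
# A minimal sketch of the regularized logistic-regression cost J(theta).
import numpy as np

def cost(theta, X, y, lam):
    """X: (m, n+1) with a leading column of ones; theta: (n+1,); y: (m,) 0/1."""
    h = 1.0 / (1.0 + np.exp(-X @ theta))      # h_theta(x^(i)) for every sample
    ce = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
    return ce + lam / (2 * len(y)) * np.sum(theta[1:] ** 2)  # skips theta_0
```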
【PyTorch深度学习-龙龙老师】-测试版202112 (PyTorch Deep Learning by Longlong, beta 2021-12)
Logarithms with common bases, such as $\log_e$, $\log_2$, and $\log_{10}$, can be computed directly with torch.log(), torch.log2(), and torch.log10(). The natural logarithm $\log_e$ is implemented as follows (assuming `import torch` has been run):

```python
In [98]: x = torch.arange(3).float()  # convert to float
         x = torch.exp(x)             # exponentiate first
         torch.log(x)                 # then take the logarithm
Out[98]: tensor([0., 1., 2.])
```

To compute a logarithm with any other base, the change-of-base formula $\log_a x = \frac{\log_e x}{\log_e a}$ lets you do it indirectly through torch.log(). Suppose here that we do not call torch.log10(); instead we compute $\frac{\log_e x}{\log_e 10}$ via the change-of-base formula:

```python
In [99]: x = torch.tensor([1., 2.])
         x = 10**x                                    # exponentiate: 10^x
         torch.log(x) / torch.log(torch.tensor(10.))  # change of base: log10
Out[99]: tensor([1., 2.])
```

This is not hard to implement; in practice torch.log() is usually all you need. 4.9.4 Matrix Multiplication: neural networks involve a large number of matrix multiplications…
机器学习课程-温州大学-03机器学习-逻辑回归 (Machine Learning Course, Wenzhou University, 03: Logistic Regression)
Putting the pieces together, we obtain the hypothesis of the logistic regression model: predict $y = 1$ when $h(x) \geq 0.5$, and predict $y = 0$ when $h(x) < 0.5$, where the sigmoid function is applied to $z = \theta^T x + b$. The per-sample loss is $L(\hat{y}, y) = -y \log(\hat{y}) - (1-y)\log(1-\hat{y})$. Note: if $z = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n + b = \theta^T x + b$, then $b$ can be absorbed into $\theta$… The ratio $\frac{p}{1-p}$ is called the odds of experiencing an event, where $p$ is the probability that the event occurs and $p \in [0, 1]$. Taking logarithms gives $\log \frac{p}{1-p}$; setting $\log \frac{p}{1-p} = \theta^T x = z$ and solving yields $p = \frac{1}{1+e^{-\theta^T x}} = \frac{1}{1+e^{-z}}$. To measure how the algorithm performs across all training samples, we define a cost function as the per-sample losses summed over the $m$ samples and divided by $m$:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m} L(\hat{y}^{(i)}, y^{(i)}) = \frac{1}{m}\sum_{i=1}^{m} \left[ -y^{(i)} \log \hat{y}^{(i)} - (1-y^{(i)}) \log(1-\hat{y}^{(i)}) \right]$$

where $\hat{y}$ denotes the predicted value and $y$ the true label.
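The decision rule above fits in a few lines of NumPy; this sketch is illustrative, with the bias $b$ absorbed into $\theta$ through a leading column of ones, as the note suggests:

```python
# A minimal sketch of the decision rule: predict y = 1 when h(x) >= 0.5.
import numpy as np

def predict(theta, X):
    """X: (m, n+1) with a leading column of ones so b is absorbed into theta."""
    h = 1.0 / (1.0 + np.exp(-X @ theta))  # sigmoid of z = theta^T x
    return (h >= 0.5).astype(int)
```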
机器学习课程-温州大学-02深度学习-神经网络的编程基础 (Machine Learning Course, Wenzhou University, 02 Deep Learning: Programming Basics of Neural Networks)
Predict $y = 1$ when $h(x) \geq 0.5$ and $y = 0$ when $h(x) < 0.5$, with the sigmoid function applied to $z = \theta^T x + b$. The per-sample loss is $L(\hat{y}, y) = -y\log(\hat{y}) - (1-y)\log(1-\hat{y})$. To measure how the algorithm performs across all training samples, the cost function sums the per-sample losses over the $m$ samples and divides by $m$: $J(\theta) = \frac{1}{m}\sum_{i=1}^{m} L(\hat{y}^{(i)}, y^{(i)})$, where $\hat{y}$ denotes the predicted value and $y$ the true value. Gradient descent for logistic regression: let $a = \hat{y}$, so the loss is $L(a, y) = -y\log(a) - (1-y)\log(1-a)$. By the chain rule,

$$\frac{\partial L}{\partial z} = \frac{\partial L}{\partial a} \cdot \frac{\partial a}{\partial z} = \left( -\frac{y}{a} + \frac{1-y}{1-a} \right) \cdot a(1-a) = a - y$$

since $\frac{\partial a}{\partial z} = a(1-a)$ for the sigmoid, while $\frac{\partial L}{\partial a} = -\frac{y}{a} + \frac{1-y}{1-a}$.
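Using $\frac{\partial L}{\partial z} = a - y$ from the derivation, one vectorized gradient-descent step can be sketched as follows (illustrative names, not from the slides):

```python
# A minimal sketch of one gradient-descent step built on dL/dz = a - y.
import numpy as np

def gd_step(w, b, X, y, lr=0.1):
    z = X @ w + b                      # z = theta^T x + b for all m samples
    a = 1.0 / (1.0 + np.exp(-z))       # a = sigmoid(z)
    dz = a - y                         # dL/dz = a - y, per sample
    w = w - lr * (X.T @ dz) / len(y)   # average gradient w.r.t. w
    b = b - lr * dz.mean()             # average gradient w.r.t. b
    return w, b
```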
共 35 条
- 1
- 2
- 3
- 4













