keras tutorial
… Theano, TensorFlow, Caffe, MXNet, etc. Keras is one of the most powerful and easy-to-use Python libraries; it is built on top of popular deep learning libraries like TensorFlow, Theano, and the Cognitive Toolkit (CNTK). Theano is a Python library used for fast numerical computation tasks. TensorFlow is the most famous symbolic math library, used for creating neural networks and deep learning … (Windows, Linux, or Mac) Python version 3.5 or higher. Keras is a Python-based neural network library, so Python must be installed on your machine. If Python is properly installed on your machine, then …
0 credits | 98 pages | 1.57 MB | 1 year ago
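The tutorial's prerequisite (Python 3.5 or higher) can be checked programmatically. A minimal stdlib sketch; the function name is ours, not from the tutorial:

```python
import sys

def python_version_ok(min_version=(3, 5)):
    """Return True if the running interpreter meets the given minimum version."""
    return sys.version_info[:2] >= min_version

# Example: verify the tutorial's stated requirement before installing Keras.
print(python_version_ok((3, 5)))
```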
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
… test_dataset.shuffle(test_dataset.cardinality()).batch(BATCH_SIZE) … We will import the tensorflow_text library so that we can use the BERT model, which relies on certain TensorFlow ops: import os … # tensorflow_text ops used in our model: import tensorflow_text as tf_text. Next we will import the tensorflow_hub library so that we can load pre-trained BERT models directly from TensorFlow Hub: import tensorflow_hub … cross-entropy loss. We refer you to the SimCLR paper for more details about the chosen loss functions and other alternatives considered. Once the desired test loss is achieved, the projection head …
0 credits | 31 pages | 4.03 MB | 1 year ago
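The loss the excerpt points at is SimCLR's normalized temperature-scaled cross-entropy (NT-Xent). The book's implementation uses TensorFlow; the sketch below is our own pure-Python illustration of the idea for a single anchor, with the temperature value chosen arbitrarily:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent_pair_loss(anchor, positive, negatives, tau=0.5):
    """-log softmax of the anchor-positive similarity against all candidates,
    with similarities scaled by a temperature tau (NT-Xent for one anchor)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

A well-aligned positive (high similarity to the anchor) yields a lower loss than a poorly aligned one, which is what drives the contrastive representation learning.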
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
… code samples are provided to bridge the gap between theory and practice. We have prepared a few helper functions: load_image(), show_image(), transform(), and transform_and_show(), which will be used to transform … available here as a Jupyter notebook for you to experiment with. The following code snippet sets up the modules, functions, and variables that will be used later on. It initializes the Natural Language Toolkit (NLTK) and … sets up the required libraries and loads the training and validation sets. We leverage the nlpaug library to perform the augmentations. It provides a simple … [5] Maas, Andrew, et al. "Learning word vectors …
0 credits | 56 pages | 18.93 MB | 1 year ago
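nlpaug's real augmenters draw substitutions from resources such as WordNet or embedding models; the toy sketch below shows only the underlying idea of word-level synonym substitution, with a made-up synonym table and substitution rate:

```python
import random

# Hypothetical synonym table; real augmenters look words up dynamically.
SYNONYMS = {"good": ["great", "fine"], "movie": ["film", "picture"]}

def synonym_augment(sentence, rng, rate=0.5):
    """Replace each known word with a random synonym with probability `rate`."""
    out = []
    for word in sentence.split():
        options = SYNONYMS.get(word.lower())
        if options and rng.random() < rate:
            out.append(rng.choice(options))
        else:
            out.append(word)
    return " ".join(out)

print(synonym_augment("a good movie overall", random.Random(7)))
```

Seeding the random generator makes the augmentation reproducible, which is useful when comparing training runs.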
PyTorch Tutorial
… fewer lines of code in comparison. • It is easy to debug and understand the code. • Python usage: this library is considered Pythonic and integrates smoothly with the Python data science stack. • It … can change them during runtime. • It includes as many layers as Torch. • It includes a lot of loss functions. • It allows building networks whose structure depends on the computation itself. • NLP: … into account, like • TensorboardX (monitor training) • PyTorchViz (visualize the computation graph) • various other functions • losses (MSE, CE, etc.) • optimizers. Prepare input data: • load data • iterate over examples. Train …
0 credits | 38 pages | 4.09 MB | 1 year ago
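The training-loop elements the tutorial lists (iterate over examples, compute a loss, take an optimizer step) can be sketched without PyTorch at all. The snippet below runs the same cycle in plain Python on a toy linear model; the dataset and learning rate are made up for illustration:

```python
# Fit y = w*x with an MSE loss by plain SGD: the same
# load-data / forward / loss-gradient / step cycle a PyTorch loop follows.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy dataset, true slope w = 2
w, lr = 0.0, 0.05
for epoch in range(200):
    for x, y in data:               # iterate over examples
        pred = w * x                # forward pass
        grad = 2 * (pred - y) * x   # d/dw of the squared error (pred - y)^2
        w -= lr * grad              # optimizer step (plain SGD)
print(round(w, 3))  # converges to the true slope, 2.0
```

In PyTorch the gradient line is replaced by `loss.backward()` and the update by `optimizer.step()`, but the structure of the loop is identical.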
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
… import tensorflow as tf; from functools import reduce; from matplotlib import pyplot as plt. We define two functions, sparsify_smallest() and compress(). sparsify_smallest() sets the absolute smallest weights in … the model and wraps the prunable blocks for sparse training using the TFMOT (TensorFlow Model Optimization) library. In this case, we prune 50% of the weights in each prunable block using magnitude-based pruning … performance. Let's go ahead and strip the pruning weights from the model that were added by the TFMOT library, as shown below: # Strip the pruning wrappers from the model. stripped_model = tfmot.sparsity.keras …
0 credits | 34 pages | 3.18 MB | 1 year ago
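The book's sparsify_smallest() operates on TensorFlow tensors; below is our own plain-Python simplification of the same magnitude-based pruning idea, not the book's code:

```python
def sparsify_smallest(weights, fraction):
    """Zero out the given fraction of weights with the smallest absolute value.
    A plain-Python sketch of magnitude-based pruning on a flat weight list."""
    k = int(len(weights) * fraction)
    if k == 0:
        return list(weights)
    # The k-th smallest magnitude acts as the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, zeroed = [], 0
    for w in weights:
        if abs(w) <= threshold and zeroed < k:  # cap at k in case of ties
            pruned.append(0.0)
            zeroed += 1
        else:
            pruned.append(w)
    return pruned

print(sparsify_smallest([0.9, -0.1, 0.4, -0.05], 0.5))
```

With fraction 0.5 the two smallest-magnitude weights (-0.05 and -0.1) are zeroed while the larger ones survive, which is exactly what 50% magnitude-based pruning does per block.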
PyTorch Release Notes
… layers, deep learning optimizers, data loading utilities, and multi-GPU and multi-node support. Functions are executed immediately instead of being enqueued in a static graph, improving ease of use and provides … Framework containers are no longer tested on Pascal GPU architectures. Transformer Engine is a library for accelerating Transformer models on NVIDIA GPUs. It includes support for 8-bit floating point …
0 credits | 365 pages | 2.94 MB | 1 year ago
Lecture Notes on Support Vector Machinejhj(ω) (12) In fact, L(ω, α, β ) can be treated as a weighted sum of the objective and constraint functions. αi is the so-called Lagrange multiplier associated with gi(ω) ≤ 0, while β i is the one associated supposed to the original constrained minimization problem); ii) G is an infimum of a set of affine functions and thus is a concave function regardless of the original problem; iii) G can be −∞ for some α and Karush-Kuhn-Tucker (KKT) Conditions We assume that the objective function and the inequality constraint functions are differentiable. Again, let ω∗ and (α∗, β ∗) be any primal and dual optimal points, respectively0 码力 | 18 页 | 509.37 KB | 1 年前3
Machine Learning Pytorch TutorialTesting Neural Networks in Pytorch ● Dataset & Dataloader ● Tensors ● torch.nn: Models, Loss Functions ● torch.optim: Optimization ● Save/load models Prerequisites ● We assume you are already familiar mean() ● Addition z = x + y ● Subtraction z = x - y ● Power y = x.pow(2) Common arithmetic functions are supported, such as: Tensors – Common Operations Tensors – Common Operations ● Transpose: official documentation for more information on data types. Tensors – PyTorch v.s. NumPy ● Many functions have the same names as well PyTorch NumPy x.reshape / x.view x.reshape x.squeeze() x.squeeze()0 码力 | 48 页 | 584.86 KB | 1 年前3
AI大模型千问 qwen 中文文档1: send the conversation and available functions to the model messages = [{ 'role': 'user', 'content': "What's the weather like in San Francisco?" }] functions = [{ (续下页) 38 Chapter 1. 文档 Qwen (接上页) print('# Assistant Response 1:') responses = [] for responses in llm.chat(messages=messages, functions=functions, stream=True): print(responses) messages.extend(responses) # extend conversation with assistant's function # Note: the JSON response may not always be valid; be sure to handle errors available_functions = { 'get_current_weather': get_current_weather, } # only one function in this example, but you0 码力 | 56 页 | 835.78 KB | 1 年前3
Lecture 5: Gaussian Discriminant Analysis, Naive Bayesy) P(a1 ≤ X ≤ b1, a2 ≤ Y ≤ b2) = � b1 a1 � b2 a2 f (x, y)dxdy Marginal probability density functions fX(x) = � ∞ −∞ f (x, y)dy for − ∞ < x < ∞ fY (x) = � ∞ −∞ f (x, y)dx for − ∞ < y < ∞ Extension f : Rn → R be the objective function, gj : Rn → R (with j = 1, · · · , m) be the m constraints functions, all of which have continuous fist derivatives. Let x∗ be an optimal solution to the following optimization �m i=1 1(y(i) = y) + 1 m + k Feng Li (SDU) GDA, NB and EM September 27, 2023 82 / 122 Convex Functions A set C is convex if the line segment between any two points in C lies in C, i.e., for ∀x1, x20 码力 | 122 页 | 1.35 MB | 1 年前3
29 documents in total













