Lecture 1: Overview
… unlabeled example in the environment; the learner can construct an arbitrary example and query an oracle for its label; or the learner can design and run experiments directly in the environment without any human guidance. … a set of correlated variables, and we would like to discover which ones are most correlated with which others. This can be represented by a graph, in which nodes represent variables and edges represent direct dependence between variables. … Unsupervised Learning: Matrix Completion. Sometimes we have missing data, that is, variables whose values are unknown …
[57 pages, 2.41 MB, 1 year ago]
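A minimal sketch of the correlation-discovery idea in the overview above: estimate pairwise correlations from a data matrix and keep an edge wherever the absolute correlation clears a threshold. This is only a crude stand-in for the dependence graph the lecture refers to (marginal correlation is not direct dependence), and the toy data, variable names, and 0.5 threshold are invented for illustration.

    import numpy as np

    # Toy data matrix: rows are observations, columns are the variables "a", "b", "c".
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    data = np.column_stack([x, 2 * x + rng.normal(size=200), rng.normal(size=200)])
    names = ["a", "b", "c"]

    # Pairwise correlation matrix (variables are columns, hence rowvar=False).
    corr = np.corrcoef(data, rowvar=False)

    # Keep an edge between two variables when |correlation| exceeds the threshold.
    threshold = 0.5
    edges = [(names[i], names[j], round(float(corr[i, j]), 3))
             for i in range(len(names)) for j in range(i + 1, len(names))
             if abs(corr[i, j]) > threshold]
    print(edges)  # expect a strong edge between "a" and "b"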
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
… approximately the same. Such a model is useful if we want to deploy a model in a space-constrained environment like a mobile device. To summarize, compression techniques help to achieve an efficient representation … floating point value in the range [-10.0, 10.0]. We need to transmit a collection (vector) of these variables over an expensive communication channel. Can we use quantization to reduce the transmission size? … and the repository in the form of Jupyter notebooks. You can run the notebooks in Google's Colab environment, which provides free access to CPU, GPU, and TPU resources. You can also run this locally on your …
[33 pages, 1.96 MB, 1 year ago]
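A minimal sketch of the quantization question posed above: map floats known to lie in [-10.0, 10.0] onto 8-bit unsigned integers before transmission (one byte per value instead of four) and map them back on the receiving side. The affine scale/offset scheme and the sample values are illustrative, not the book's exact code.

    import numpy as np

    values = np.array([-9.7, -0.25, 3.14159, 9.99], dtype=np.float32)

    # Affine quantization of the known range [-10.0, 10.0] onto 0..255.
    lo, hi = -10.0, 10.0
    scale = (hi - lo) / 255.0
    quantized = np.round((values - lo) / scale).astype(np.uint8)  # 1 byte per value

    # Dequantize on the receiving side; values are recovered only approximately.
    restored = quantized.astype(np.float32) * scale + lo
    print(quantized)
    print(restored)
    print("max error:", float(np.max(np.abs(values - restored))))  # about scale / 2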
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
    … math.square(Y - output))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    losses.append(loss.numpy())
    min_loss = np.min(losses)
    search_results …
MnasNet maximizes the objective $\mathrm{ACC}(m) \times [\mathrm{LAT}(m)/T]^{w}$, where $w$ is a weight factor defined as $w = \alpha$ if $\mathrm{LAT}(m) \leq T$ and $w = \beta$ otherwise, such that the variables $\alpha$ and $\beta$ control the reward penalty for latency violation. In addition to the multi-objective optimization …
    … controller.rnn.trainable_variables)
    for index, grad in enumerate(grads):
        grads[index] = tf.multiply(grad, reward)
    self.optimizer.apply_gradients(
        zip(grads, controller.rnn.trainable_variables)
    )
The next chunk …
[33 pages, 2.48 MB, 1 year ago]
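A minimal sketch of the latency-aware reward just described, assuming the piecewise exponent form from the MnasNet paper; the accuracy, latency, target, and alpha/beta values below are made-up numbers rather than the book's.

    def mnasnet_reward(accuracy, latency_ms, target_ms, alpha=-0.07, beta=-0.07):
        """Reward = ACC(m) * (LAT(m) / T) ** w, with w = alpha if LAT(m) <= T else beta."""
        w = alpha if latency_ms <= target_ms else beta
        return accuracy * (latency_ms / target_ms) ** w

    # A model meeting the 80 ms target keeps its accuracy as reward;
    # a slightly more accurate but slower model is penalized below its raw accuracy.
    print(mnasnet_reward(accuracy=0.75, latency_ms=80.0, target_ms=80.0))   # 0.75
    print(mnasnet_reward(accuracy=0.76, latency_ms=110.0, target_ms=80.0))  # about 0.743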
Keras tutorial
… quite easy. Follow the steps below to properly install Keras on your system. Step 1: Create a virtual environment. Virtualenv is used to manage Python packages for different projects. This will be helpful to … a virtual environment while developing Python applications. Linux/macOS: go to your project root directory and type the command below to create a virtual environment: python3 … keras … Step 2: Activate the environment. This step configures the python and pip executables in your shell path. Linux/macOS: we have now created a virtual environment named "kerasvenv". Move to the …
[98 pages, 1.57 MB, 1 year ago]
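The shell command above is cut off in the excerpt, so here is a hedged Python-only sketch that does the same job with the standard-library venv module (roughly what a `python3 -m venv kerasvenv` invocation would do); the environment name kerasvenv comes from the excerpt.

    import venv
    from pathlib import Path

    # Create a virtual environment named "kerasvenv" in the current project directory,
    # with pip installed inside it.
    env_dir = Path("kerasvenv")
    venv.create(env_dir, with_pip=True)

    # Activation itself is a shell step (e.g. sourcing kerasvenv/bin/activate on
    # Linux/macOS); this script only creates the environment.
    print("created", env_dir.resolve())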
【PyTorch深度学习-龙龙老师】-测试版202112 (PyTorch Deep Learning, Longlong, preview edition 2021-12)
… the graph-execution stage; this code must be run with TensorFlow 1.x:
    # Create the runtime environment
    sess = tf.InteractiveSession()
    # The initialization step must also be run as an operation
    init = tf.global_variables_initializer()
    sess.run(init)  # run the initialization op to complete initialization
    # Run the output endpoint; the input endpoints must be fed values
    c_numpy = sess.run(c_op, feed_dict={a_ph: …
… go to the Anaconda download page, pick the download link for the latest Python version, and run the installer once the download finishes. As shown in Figure 1.22, check the "Add Anaconda to my PATH environment variable" option so that Anaconda can be invoked from the command line. As shown in Figure 1.23, the installer asks whether to install VS Code as well; choose Skip. The whole installation takes about …
… The trainable_variables and variables attributes of a Sequential object hold the list of tensors to be optimized and the list of all tensors across its layers, for example:
    In [3]: # print the names and shapes of the network's trainable parameters
    for p in network.trainable_variables:
        print(p.name, p.shape)
[439 pages, 29.91 MB, 1 year ago]
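The session example above is cut off mid-call, so here is a hedged, self-contained reconstruction of the same TensorFlow 1.x pattern (placeholders, an output op, initialization, and a feed_dict). The placeholder names a_ph, b_ph and the simple addition op are assumptions based on the excerpt; on a TensorFlow 2.x install the 1.x API is reached through tf.compat.v1 as shown.

    import tensorflow.compat.v1 as tf  # 1.x-style graph/session API on a TF 2.x install
    tf.disable_eager_execution()

    # Build the graph: two input endpoints (placeholders) and one output endpoint.
    a_ph = tf.placeholder(tf.float32, name="a")  # placeholder names are assumptions
    b_ph = tf.placeholder(tf.float32, name="b")
    c_op = tf.add(a_ph, b_ph, name="c")

    # Run the graph: initialize, then feed values to the input endpoints.
    sess = tf.InteractiveSession()
    sess.run(tf.global_variables_initializer())
    c_numpy = sess.run(c_op, feed_dict={a_ph: 2.0, b_ph: 4.0})
    print("c =", c_numpy)  # 6.0
    sess.close()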
Qwen (千问) AI large model Chinese documentation
    'model_server': 'dashscope',
    # 'api_key': 'YOUR_DASHSCOPE_API_KEY',
    # It will use the `DASHSCOPE_API_KEY` environment variable if 'api_key' is not set here.
    # Use your own model service compatible with OpenAI …
    …
    FAISSWrapper.from_documents(docs, embeddings)
    prompt = PromptTemplate(
        template=PROMPT_TEMPLATE, input_variables=["context_str", "question"]
    )
    chain_type_kwargs = {"prompt": prompt, "document_variable_name": …
[56 pages, 835.78 KB, 1 year ago]
Lecture 5: Gaussian Discriminant Analysis, Naive Bayes
Probability theory review: sample space, events and probability; conditional probability; random variables and probability distributions; joint probability distribution; independence; conditional probability. … A random variable is a function of the outcome of a randomized experiment, X : S → R. Examples: discrete random variables (S is discrete), e.g. X(s) = True if a randomly drawn person s from our class S is female, X(s) = …; continuous random variables (S is continuous), e.g. X(s) = r, the heart rate of a randomly drawn person s in our class S. … Random variables: real-valued …
[122 pages, 1.35 MB, 1 year ago]
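A minimal sketch of the definition above: a random variable is just a function from the sample space to the reals, so with a finite, equally likely sample space its distribution can be computed by counting. The five-person "class" and its attributes are invented for illustration.

    from collections import Counter
    from fractions import Fraction

    # Toy sample space S: five equally likely students with made-up attributes.
    S = [
        {"name": "s1", "female": True,  "heart_rate": 72},
        {"name": "s2", "female": False, "heart_rate": 65},
        {"name": "s3", "female": True,  "heart_rate": 80},
        {"name": "s4", "female": False, "heart_rate": 70},
        {"name": "s5", "female": True,  "heart_rate": 68},
    ]

    # A random variable X : S -> R, here the indicator that a drawn person is female.
    def X(s):
        return 1 if s["female"] else 0

    # Distribution of X under a uniform draw from S: P(X = x) = |{s : X(s) = x}| / |S|.
    counts = Counter(X(s) for s in S)
    dist = {x: Fraction(c, len(S)) for x, c in counts.items()}
    print(dist)  # {1: Fraction(3, 5), 0: Fraction(2, 5)}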
Lecture Notes on Support Vector Machine
… be as small as possible, by minimizing the sum of the slack variables $\sum_i \xi_i$. We reformulate the SVM problem by introducing slack variables $\xi_i$:
    $\min_{w,b,\xi} \ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} \xi_i$    (46)
    s.t. $y^{(i)}(w^T x^{(i)} + b) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad i = 1, \ldots, m$
… $\alpha_m y^{(m)}$ … Therefore, in the Sequential Minimal Optimization (SMO) algorithm, we optimize two of the variables at a time. We first summarize the general form of the SMO algorithm in Algorithm 1. The algorithm … optimization process of the SMO algorithm (i.e., Line 4 in Algorithm 1). By treating $\alpha_1$ and $\alpha_2$ as variables while the others are treated as known quantities, the objective function (56) can be rewritten as $J(\alpha_1, \ldots$ …
[18 pages, 509.37 KB, 1 year ago]
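For context on the box constraint that the SMO discussion above relies on, here is the standard soft-margin dual that objective (56) refers to, written out as a hedged textbook reminder rather than quoted from these notes:

    % Soft-margin SVM dual (standard form; C is the slack penalty from problem (46))
    \max_{\alpha} \; \sum_{i=1}^{m} \alpha_i
      - \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m}
        \alpha_i \alpha_j \, y^{(i)} y^{(j)} \, \langle x^{(i)}, x^{(j)} \rangle
    \quad \text{s.t.} \quad 0 \le \alpha_i \le C \ \ \forall i,
    \qquad \sum_{i=1}^{m} \alpha_i y^{(i)} = 0
    % The equality constraint couples the alphas, so no single alpha can be changed
    % alone; SMO therefore updates two alphas at a time while keeping the sum fixed.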
Lecture 6: Support Vector Machine
… minimizing the sum of the slack variables $\sum_i \xi_i$ … Soft-Margin SVM (cont'd): reformulating the SVM problem by introducing slack variables $\xi_i$:
    $\min_{\omega,b,\xi} \ \frac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{m} \xi_i$ …
… For each $i$, $\alpha_i = \arg\max_{\alpha_i} J(\alpha_1, \cdots, \alpha_{i-1}, \alpha_i, \alpha_{i+1}, \cdots, \alpha_m)$: for some $\alpha_i$, fix the other variables and re-optimize $J(\alpha)$ with respect to $\alpha_i$. … Sequential …
[82 pages, 773.97 KB, 1 year ago]
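The "fix the other variables and re-optimize J(α) with respect to αi" step above is plain coordinate ascent. A minimal sketch on a made-up two-variable concave objective (not the SVM dual itself, whose equality constraint forces the pairwise updates of SMO):

    # Coordinate ascent on J(a1, a2) = -(a1 - 1)**2 - (a2 - 3)**2 + a1 * a2 / 2,
    # a made-up smooth concave objective used only to show the update pattern.
    def J(a1, a2):
        return -(a1 - 1) ** 2 - (a2 - 3) ** 2 + a1 * a2 / 2

    a1, a2 = 0.0, 0.0
    for _ in range(50):
        # Fix a2, maximize over a1: dJ/da1 = -2(a1 - 1) + a2/2 = 0  =>  a1 = 1 + a2/4
        a1 = 1 + a2 / 4
        # Fix a1, maximize over a2: dJ/da2 = -2(a2 - 3) + a1/2 = 0  =>  a2 = 3 + a1/4
        a2 = 3 + a1 / 4

    print(round(a1, 4), round(a2, 4), round(J(a1, a2), 4))  # converges to the maximizer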
Lecture Notes on Gaussian Discriminant Analysis, Naive Bayes …
… $\in [n]$ and $y \in [k]$. In the Naive Bayes (NB) model, the features and the label can be represented by random variables $\{X_j\}_{j \in [n]}$ and $Y$, respectively. Furthermore, for all $j \neq j'$, Naive Bayes assumes $X_j$ and $X_{j'}$ are conditionally independent given $Y$. … sample space $[v] = \{1, 2, \cdots, v\}$ identically and independently. Let $X_j$ and $Y$ be the random variables representing the $j$-th feature and the label. We define $p(t \mid y) = P(X_j = t \mid Y = y)$ for some $j$ … which we "guess" for each training sample. Specifically, supposing $X^{(i)}$ and $Z^{(i)}$ are the random variables representing the features and the label of the $i$-th data sample, $p(x^{(i)}; \theta) = P(X^{(i)} = x^{(i)})$ is …
[19 pages, 238.80 KB, 1 year ago]
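To make the conditional-independence assumption above concrete, here is the standard factorization it buys; a hedged textbook statement rather than a quotation from these notes:

    % Naive Bayes: conditional independence of X_1, ..., X_n given Y turns the class
    % posterior into a product of per-feature conditionals times the class prior.
    P(Y = y \mid X_1 = x_1, \ldots, X_n = x_n)
      \;\propto\; P(Y = y) \prod_{j=1}^{n} P(X_j = x_j \mid Y = y)
    % Prediction picks the class y that maximizes this product (the MAP class).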
共 22 条













