Lecture 1: Overview, Feng Li (SDU), September 6, 2023 (57 pages, 2.41 MB)
Source of training data: examples may be provided at random, outside of the learner's control. Are negative examples available, or only positive ones? Good training examples selected by a "benevolent" …
Classification and regression. Classification: finding decision boundaries. Regression: fitting a curve/plane to data. [Figure: a regression fit of target t against input x.]
Supervised classification: … watching a given video on YouTube. Predict the location in 3D space of a robot-arm end effector, given control signals (torques) sent to its various motors. Predict the amount of prostate-specific antigen (PSA) …
Lecture Notes on Linear Regression (6 pages, 455.98 KB)
… + θ_1 x_1 + θ_0. Geometrically, when n = 1, h_θ(x) is a line in the 2D plane, while h_θ(x) represents a plane in 3D space when n = 2. Generally, when n ≥ 3, h_θ(x) defines a so-called "hyperplane" …
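The linear hypothesis in this excerpt can be sketched directly (a minimal illustration; the parameter and feature values below are made up):

```python
import numpy as np

def h(theta, x):
    """Linear hypothesis h_theta(x) = theta_0 + theta_1*x_1 + ... + theta_n*x_n.

    theta: array of length n+1, with theta[0] the intercept theta_0;
    x: feature vector of length n.
    """
    return theta[0] + np.dot(theta[1:], x)

theta = np.array([1.0, 2.0, -0.5])  # theta_0 = 1, theta_1 = 2, theta_2 = -0.5
x = np.array([3.0, 4.0])            # n = 2, so h defines a plane in 3D space
print(h(theta, x))                  # 1 + 2*3 - 0.5*4 = 5.0
```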
Lecture 3: Logistic Regression (29 pages, 660.51 KB)
P(Y = 1 | X = x; θ) = 1/(1 + exp(−θᵀx)). The "score" θᵀx is also a measure of the distance of x from the hyperplane (the score is positive for positive examples and negative for negative examples). High positive score: high …
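The formula in this excerpt maps the score θᵀx through the sigmoid; a small sketch (parameter values are made up):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_y1(theta, x):
    """P(Y = 1 | X = x; theta) = 1 / (1 + exp(-theta^T x))."""
    score = sum(t * xi for t, xi in zip(theta, x))
    return sigmoid(score)

# A score of 0 puts x exactly on the decision hyperplane: probability 0.5.
print(p_y1([1.0, -1.0], [2.0, 2.0]))  # -> 0.5
```

Positive scores push the probability above 0.5 and negative scores below it, matching the sign convention in the excerpt.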
Lecture 2: Linear Regression, Feng Li (SDU) (31 pages, 608.38 KB)
The relationship between x and y is modeled as a linear function; in the 2D plane, the linear function is a straight line. Hypothesis: h_θ(x) = θ_0 + θ_1 x (where θ_0 and θ_1 are parameters).
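Fitting that straight line to data can be sketched with the closed-form least-squares solution for one feature (a minimal example; the data points are made up and lie exactly on y = 1 + 2x):

```python
# Least-squares fit of h(x) = theta0 + theta1 * x (one-feature case).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # exactly y = 1 + 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope: covariance of (x, y) over variance of x; intercept from the means.
theta1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
         sum((x - mean_x) ** 2 for x in xs)
theta0 = mean_y - theta1 * mean_x
print(theta0, theta1)  # -> 1.0 2.0
```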
Lecture Notes on Support Vector Machine (18 pages, 509.37 KB)
fli@sdu.edu.cn, Shandong University, China. 1 Hyperplane and Margin. In an n-dimensional space, a hyperplane is defined by ωᵀx + b = 0 (1), where ω ∈ Rⁿ is the outward-pointing normal vector and b is the …
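The signed distance from a point to the hyperplane ωᵀx + b = 0, which underlies the margin, can be sketched as follows (ω, b, and the point are made-up values):

```python
import numpy as np

def signed_distance(w, b, x):
    """Signed distance from point x to the hyperplane w^T x + b = 0.

    Positive on the side the normal vector w points toward, negative on the other.
    """
    return (np.dot(w, x) + b) / np.linalg.norm(w)

w = np.array([3.0, 4.0])  # ||w|| = 5
print(signed_distance(w, -5.0, np.array([3.0, 4.0])))  # (9 + 16 - 5) / 5 = 4.0
```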
Lecture 5: Gaussian Discriminant Analysis, Naive Bayes, Feng Li (SDU), September 27, 2023 (122 pages, 1.35 MB)
… → R for ∀i = 1, · · · , m. ∇f|_q is "perpendicular" to the constraint surface; ∇f|_q lies in the plane determined by the ∇g_i|_q (i = 1, · · · , m).
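The geometric condition in this excerpt corresponds to the standard Lagrange multiplier equation (a sketch of the usual statement, not quoted from the slides):

```latex
\nabla f\big|_q \;=\; \sum_{i=1}^{m} \lambda_i \,\nabla g_i\big|_q,
\qquad g_i(q) = 0 \quad (i = 1, \dots, m)
```

That is, at a constrained extremum q, the gradient of f is a linear combination of the constraint gradients, which is exactly "lying in the plane determined by the ∇g_i|_q".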
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures (53 pages, 3.92 MB)
… we draw one such decision boundary. If we had more than two features, we would need to draw a hyperplane to separate the points in more than two dimensions. [Footnote 3: Linear Separability - https://en.wikipedia…]
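A two-feature decision boundary like the one described can be sketched as a simple sign test against a separating line (the weights, bias, and points below are made up for illustration):

```python
# A hypothetical 2D decision boundary w^T x + b = 0 separating two point sets.
w, b = (1.0, 1.0), -1.5

def side(x):
    """True if x falls on the positive side of the boundary."""
    return w[0] * x[0] + w[1] * x[1] + b > 0

positives = [(1.0, 1.0), (2.0, 0.0)]
negatives = [(0.0, 0.0), (0.5, 0.5)]
# The two classes are linearly separable by this boundary:
print(all(side(p) for p in positives) and not any(side(n) for n in negatives))  # True
```

With more than two features, the same sign test generalizes to a hyperplane: the dot product simply runs over more dimensions.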
动手学深度学习 v2.0 (Dive into Deep Learning) (797 pages, 29.45 MB)
… (Cer et al., 2017). Our goal is to predict these scores. Samples from the Semantic Textual Similarity Benchmark dataset take the form (sentence 1, sentence 2, similarity score), for example:
• "A plane is taking off." / "An air plane is taking off.", score 5.000;
• "A woman is eating something." / "A …
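One way such regression targets might be represented and prepared for training is sketched below. The first pair and the [0, 5] score range come from the excerpt; the second pair is hypothetical, and the rescaling to [0, 1] is an assumption for illustration, not taken from the book:

```python
# (sentence1, sentence2, similarity score in [0, 5]) samples, as in STS-B.
samples = [
    ("A plane is taking off.", "An air plane is taking off.", 5.000),
    ("A woman is eating something.", "A woman is eating meat.", 3.000),  # hypothetical pair
]

def rescale(score, lo=0.0, hi=5.0):
    """Map a raw similarity score onto [0, 1] for a regression head."""
    return (score - lo) / (hi - lo)

print([rescale(s) for _, _, s in samples])  # -> [1.0, 0.6]
```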
AI大模型千问 qwen 中文文档 (Qwen documentation, Chinese) (56 pages, 835.78 KB)
… .to(device)
    # Directly use generate() and tokenizer.decode() to get the output.
    # Use `max_new_tokens` to control the maximum output length.
    generated_ids = model.generate(
        model_inputs.input_ids,
        max_new_tokens=512,
    )
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation (33 pages, 2.48 MB)
… maximizes the following objective: ACC(m) × [LAT(m)/T]^w, where w is a weight factor defined as w = α if LAT(m) ≤ T, and w = β otherwise, such that the α and β variables control the reward penalty for latency violation. In addition to the multi-objective optimization, MnasNet …
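That multi-objective reward can be sketched as a small function, following the MnasNet formulation (function and variable names are my own; ACC is the measured accuracy, LAT the measured latency, T the target latency; α = β = −0.07 is the soft-constraint setting reported by the MnasNet authors):

```python
def mnasnet_reward(acc, lat, target, alpha=-0.07, beta=-0.07):
    """Multi-objective reward ACC * (LAT / T)^w, with w = alpha if LAT <= T else beta."""
    w = alpha if lat <= target else beta
    return acc * (lat / target) ** w

# Hitting the latency target exactly leaves the accuracy unchanged,
# while exceeding it scales the reward down.
print(mnasnet_reward(0.75, 80.0, 80.0))  # -> 0.75
```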













