Keras Tutorial
Move to your project root directory and type the command below to create a virtual environment:

    python3 -m venv kerasenv

After executing the above command, a "kerasenv" directory is created with bin, lib and include folders. Windows users can use the command below instead:

    py -m venv kerasenv

Step 2: Activate the environment. This step configures the python and pip executables on your shell path. On Linux/Mac, move inside the "kerasenv" folder and type the command below:

    $ cd kerasenv
    kerasenv $ source bin/activate

Windows users move inside the "kerasenv" folder and type the command below:

    .\kerasenv\Scripts\activate
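Once the environment is active and Keras has been installed into it (for example with pip install keras), a minimal sanity check is to import the package and print its version. This is a sketch assuming the install succeeded:

    import keras

    # Confirms Keras is importable inside the activated kerasenv
    print(keras.__version__)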
PyTorch Release Notes
PyTorch also includes standard neural network layers, deep learning optimizers, data loading utilities, and multi-GPU and multi-node support. Functions are executed immediately instead of being enqueued in a static graph. … Your Docker® environment must support NVIDIA GPUs. To run a container, issue the appropriate command as explained in Running A Container and specify the registry, repository, and tags. About this task: you can control the runtime resources of the container by including additional flags and settings that are used with the command. These flags and settings are described in Running A Container. ‣ The GPUs are explicitly defined …
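Inside a running container, a quick way to confirm that the GPUs are actually visible to PyTorch is to query the CUDA runtime. A minimal sketch, assuming the container was started with GPU access enabled:

    import torch

    print(torch.__version__)              # the container's PyTorch build
    print(torch.cuda.is_available())      # True only if GPUs were passed through
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))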
Experiment 1: Linear Regression
… descent, we need to add the x₀ = 1 intercept term to every example. To do this in Matlab/Octave, the commands are:

    m = length(y);       % store the number of training examples
    x = [ones(m, 1), x]; % add a column of ones for the intercept term

… iterations). After convergence, record the final values of θ₀ and θ₁ that you get, and plot the straight-line fit from your algorithm, according to θ, on the same graph as your training data. To visualize the relationship between J and θ, plot the cost surface:

    % Plot the surface
    % Because of the way meshgrids work in the surf command, we
    % need to transpose J_vals before calling surf, or else the
    % axes will be flipped
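The same fit can be reproduced outside Matlab/Octave. Below is a minimal batch gradient descent sketch in Python/NumPy; the data, learning rate, and iteration count are illustrative, not the exercise's dataset:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])    # toy inputs
    y = np.array([2.1, 3.9, 6.2, 7.8])    # toy targets
    m = len(y)
    X = np.column_stack([np.ones(m), x])  # add the x0 = 1 intercept term

    theta = np.zeros(2)
    alpha = 0.07                          # learning rate (illustrative)
    for _ in range(1500):
        grad = X.T @ (X @ theta - y) / m  # gradient of the squared-error cost
        theta -= alpha * grad
    print(theta)                          # [intercept, slope] after convergence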
Experiment 2: Logistic Regression and Newton's Method
… classes. In Matlab/Octave, you can separate the positive class and the negative class using the find command:

    % find returns the indices of the
    % rows meeting the specified condition
    pos = find(y == 1); neg = find(y == 0);

The decision boundary is defined as the line where P(y = 1 | x; θ) = g(θᵀx) = 0.5, which corresponds to θᵀx = 0. Plotting the decision boundary is therefore equivalent to plotting the line θᵀx = 0. When you are finished, your …
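Newton's method converges in very few iterations on this problem because it uses second-order information. Here is a minimal NumPy sketch of one update step; it assumes X already carries the intercept column and y has labels in {0, 1}:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def newton_step(theta, X, y):
        m = len(y)
        h = sigmoid(X @ theta)                   # predicted probabilities
        grad = X.T @ (h - y) / m                 # gradient of the log-loss
        H = (X.T * (h * (1 - h))) @ X / m        # Hessian of the log-loss
        return theta - np.linalg.solve(H, grad)  # theta := theta - H^-1 * grad

Iterating newton_step a handful of times from theta = 0 typically suffices for convergence on this exercise's scale of data.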
《Efficient Deep Learning Book》[EDL] Chapter 2 – Compression Techniques
… too much? The original (pre-quantization) image is shown in Figure 2-6. Get the image using this command:

    !wget https://github.com/reddragon/book-codelabs/raw/main/pia23378-16.jpeg

Solution: First, we … certain trade-offs. We hope that this chapter helps more deep learning models to cross the finish line. The next chapter will introduce learning techniques to improve quality metrics like accuracy and …
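To make the quantization trade-off concrete, here is a minimal sketch of 8-bit affine quantization of an array. This is a generic illustration, not the book's exact code:

    import numpy as np

    def quantize_uint8(x):
        """Map a float array onto 256 uint8 levels; return the scale
        and offset needed to approximately invert the mapping."""
        lo, hi = float(x.min()), float(x.max())
        scale = (hi - lo) / 255.0 or 1.0  # guard against constant input
        q = np.round((x - lo) / scale).astype(np.uint8)
        return q, scale, lo

    def dequantize_uint8(q, scale, lo):
        # Lossy: each element is off by at most scale / 2
        return q.astype(np.float32) * scale + lo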
Dive into Deep Learning (动手学深度学习) v2.0
… where the coefficient 2 is the slope of the tangent line:

    x = np.arange(0, 3, 0.1)
    plot(x, [f(x), 2 * x - 3], 'x', 'f(x)', legend=['f(x)', 'Tangent line (x=1)'])

2.4.2 Partial Derivatives. So far we have discussed only the differentiation of functions of a single variable. In deep learning, functions usually depend on many variables, so we need to extend the ideas of differentiation to multivariate functions. …

    def read_time_machine():  #@save
        """Load the time machine dataset into a list of text lines."""
        with open(d2l.download('time_machine'), 'r') as f:
            lines = f.readlines()
        return [re.sub('[^A-Za-z]+', ' ', line).strip().lower() for line in lines]

    lines = read_time_machine()
    print(f'# total text lines: {len(lines)}')
    print(lines[0])

    def tokenize(lines, token='word'):  #@save
        """Split text lines into word or character tokens."""
        if token == 'word':
            return [line.split() for line in lines]
        elif token == 'char':
            return [list(line) for line in lines]
        else:
            print('ERROR: unknown token type: ' + token)
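A quick usage sketch for the tokenizer above (the exact output depends on the downloaded corpus):

    tokens = tokenize(lines, token='word')
    for i in range(3):
        print(tokens[i])  # each element is the list of word tokens for one line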
Lecture Notes on Gaussian Discriminant Analysis, Naive Bayes and EM
… the Expectation-Maximization (EM) algorithm. 6.1 Convex Sets and Convex Functions. A set C is convex if the line segment between any two points in C lies in C, i.e., for all x₁, x₂ ∈ C and all θ with 0 ≤ θ ≤ 1, we have θx₁ + (1 − θ)x₂ ∈ C. … we have the inequality in the fifth line. The sixth equality also comes from Eq. (31). To tighten the lower bound, we should let the equality (in the fourth line) hold. According to Jensen's inequality, … which in particular holds for Qᵢ = Qᵢ^[t], according to Eq. (32). We have the inequality in the second line because θ^[t+1] is calculated by

$$\theta^{[t+1]} = \arg\max_{\theta} \sum_i \sum_{z^{(i)} \in \Omega} Q_i^{[t]}(z^{(i)}) \log p(x^{(i)}, z^{(i)}; \theta).$$
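For context, the lower bound being tightened here is the standard Jensen's-inequality bound on the log-likelihood (log is concave):

$$\log p(x^{(i)}; \theta) = \log \sum_{z^{(i)}} Q_i(z^{(i)}) \, \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})} \;\ge\; \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})},$$

with equality exactly when $Q_i(z^{(i)}) = p(z^{(i)} \mid x^{(i)}; \theta)$, which is the E-step choice of $Q_i$.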
如何利用深度学习提高高精地图生产的自动化率 – 邹亮 (How to Use Deep Learning to Improve the Automation Rate of HD Map Production – Zou Liang)
[Slide text is largely unrecoverable from the extraction; the legible headings are: Lane Line Detection; Sign Detection; Traffic Light Detection.]
Lecture Notes on Support Vector Machine
… gᵢ(ω∗) = 0 for all i = 1, 2, …, k. Another observation is that, since the inequality in the third line holds with equality, ω∗ actually minimizes L(ω, α∗, β∗) over ω. 2.2.3 Karush–Kuhn–Tucker (KKT) Conditions. … In the following, we take α₁ and α₂ as an example to explain the optimization process of the SMO algorithm (i.e., Line 4 in Algorithm 1). By treating α₁ and α₂ as variables and the others as known quantities, the objective … [Figure 7: the feasible values of α₁ and α₂; panel (b) shows the case y⁽¹⁾y⁽²⁾ = 1.] … which confines the optimization to a line. Since 0 ≤ α₁, α₂ ≤ C, we can derive a lower bound L and an upper bound H for them. As shown in Fig. …
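The bounds referenced here are the standard SMO clipping bounds (Platt, 1998) for the variable being updated, α₂ in this notation:

$$y^{(1)} \ne y^{(2)}: \quad L = \max(0,\, \alpha_2 - \alpha_1), \qquad H = \min(C,\, C + \alpha_2 - \alpha_1),$$
$$y^{(1)} = y^{(2)}: \quad L = \max(0,\, \alpha_1 + \alpha_2 - C), \qquad H = \min(C,\, \alpha_1 + \alpha_2).$$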
Qwen (千问) Large Language Model Documentation (Chinese)
…

    rank0_print("Loading data...")

    train_data = []
    with open(data_args.data_path, "r") as f:
        for line in f:
            train_data.append(json.loads(line))
    train_dataset = dataset_cls(train_data, tokenizer=tokenizer, max_len=max_len)

    if data_args.eval_data_path:
        eval_data = []
        with open(data_args.eval_data_path, "r") as f:
            for line in f:
                eval_data.append(json.loads(line))
        eval_dataset = dataset_cls(eval_data, tokenizer=tokenizer, max_len=max_len)
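The loader above expects JSON Lines: one JSON object per line of the data file. A minimal sketch of producing such a file follows; the record fields are illustrative placeholders, not Qwen's required schema:

    import json

    records = [
        {"id": "sample-0", "text": "hello"},  # hypothetical fields, for illustration only
        {"id": "sample-1", "text": "world"},
    ]
    with open("train.jsonl", "w") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")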