IT文库
This search took 0.075 seconds and found about 23 matching results.
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    representation. The quantized sine wave is a low-precision representation that takes integer values in the range [0, 5]. As a result, the quantized wave requires low transmission bandwidth. Figure 2-3: Quantization using an example. Let's assume we have a variable x which takes a 32-bit floating point value in the range [-10.0, 10.0]. We need to transmit a collection (vector) of these variables over an expensive communication channel … to lie in this range. 2. Let us assume that the values of x will be uniformly distributed in this range. This means that all values of x are equally likely to lie in any part of the range from [xmin, xmax]
    0 credits | 33 pages | 1.96 MB | 1 year ago
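The excerpt above quantizes a 32-bit float in [-10.0, 10.0] down to a small integer range to save transmission bandwidth. A minimal sketch of that idea (the function names and the 3-bit width are my own choices, not the book's code):

```python
def quantize(x, x_min=-10.0, x_max=10.0, bits=3):
    """Uniformly map a float in [x_min, x_max] to an integer in [0, 2**bits - 1]."""
    levels = 2 ** bits - 1
    x = min(max(x, x_min), x_max)          # clamp into the representable range
    scale = (x_max - x_min) / levels       # width of one quantization step
    return round((x - x_min) / scale)

def dequantize(q, x_min=-10.0, x_max=10.0, bits=3):
    """Map the integer code back to an approximate float."""
    levels = 2 ** bits - 1
    scale = (x_max - x_min) / levels
    return x_min + q * scale
```

Each value then needs only `bits` bits on the wire instead of 32, at the cost of a rounding error of at most half a quantization step.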
  • PDF 动手学深度学习 v2.0

    …you will see that when x = 1, the derivative u′ is 2. def numerical_lim(f, x, h): return (f(x + h) - f(x)) / h h = 0.1 for i in range(5): print(f'h={h:.5f}, numerical limit={numerical_lim(f, 1, h):.5f}') h *= 0.1 h=0.10000, numerical … estimates = cum_counts / cum_counts.sum(dim=1, keepdims=True) d2l.set_figsize((6, 4.5)) for i in range(6): d2l.plt.plot(estimates[:, i].numpy(), label=("P(die=" + str(i + 1) + ")")) d2l.plt.axhline(y=0 … tolist() Now we can benchmark the workload. First, we use a for loop, performing one addition per element. c = torch.zeros(n) timer = Timer() for i in range(n): c[i] = a[i] + b[i] f'{timer.stop():.5f} sec' '0.16749 sec' Alternatively, we use the overloaded + operator to compute the elementwise sum. timer
    0 credits | 797 pages | 29.45 MB | 1 year ago
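The `numerical_lim` fragment in the excerpt estimates a derivative by repeatedly shrinking h. A self-contained version (I assume the book's example function u = f(x) = 3x² - 4x, whose derivative at x = 1 is 2, matching the excerpt):

```python
def f(x):
    # assumed example function; f'(x) = 6x - 4, so f'(1) = 2
    return 3 * x ** 2 - 4 * x

def numerical_lim(f, x, h):
    # finite-difference approximation of f'(x)
    return (f(x + h) - f(x)) / h

h = 0.1
approximations = []
for i in range(5):
    approximations.append(numerical_lim(f, 1, h))
    h *= 0.1   # shrink h; the estimate approaches the true derivative 2
```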
  • PDF Keras: 基于 Python 的深度学习库

    TimeseriesGenerator import numpy as np … data = np.array([[i] for i in range(50)]) targets = np.array([[i] for i in range(50)]) data_gen = TimeseriesGenerator(data, targets, length=10, sampling_rate=2 … zca_epsilon=1e-06, rotation_range=0.0, width_shift_range=0.0, height_shift_range=0.0, brightness_range=None, shear_range=0.0, zoom_range=0.0, channel_shift_range=0.0, fill_mode='nearest', cval=0 … whitening. • rotation_range: Int. Degree range for random rotations. • width_shift_range: Float, 1-D array or int – float: if <1, fraction of total width; if >=1, value in pixels. – 1-D array: random elements from the array. – int: integer number of pixels drawn from the interval (-width_shift_range, +width_shift_range)
    0 credits | 257 pages | 1.19 MB | 1 year ago
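The `TimeseriesGenerator(data, targets, length=10, sampling_rate=2)` call in the excerpt slices a series into strided windows with a one-step-ahead target. A plain-Python sketch of that windowing (my reimplementation of the behavior as I understand it, not Keras code):

```python
def timeseries_windows(data, targets, length, sampling_rate=1, stride=1):
    """Build (window, target) pairs: each window covers `length` consecutive
    positions sampled every `sampling_rate` steps; the target is the value
    immediately after the window."""
    xs, ys = [], []
    for start in range(0, len(data) - length, stride):
        end = start + length
        xs.append([data[i] for i in range(start, end, sampling_rate)])
        ys.append(targets[end])
    return xs, ys

series = list(range(50))
xs, ys = timeseries_windows(series, series, length=10, sampling_rate=2)
# the first window samples indices 0, 2, 4, 6, 8 and predicts the value at index 10
```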
  • PDF 机器学习课程-温州大学-时间序列总结

    Resampling … Data statistics: sliding windows … Time-series models: ARIMA … Creating a fixed-frequency time series: pandas provides a date_range() function, mainly used to generate a DatetimeIndex object with a fixed frequency. date_range(start=None, end=None, periods=None, freq=None, tz=None, normalize … of these parameters, at least three must be specified, otherwise an error occurs. When date_range() is called to create a DatetimeIndex and only a start date (start) and an end date (end) are passed, the generated timestamps default to daily frequency, i.e. freq is 'D'. pd.date_range('2018/08/10', '2018/08/20') If only a start date or an end date is passed, then also … date_range(start='2018/08/10', periods=5) pd.date_range(end='2018/08/10', periods=5) If you want every timestamp in the series to fall on a fixed Sunday, set the freq parameter to 'W-SUN' when creating the DatetimeIndex. dates_index = pd.date_range('2018-01-01'
    0 credits | 67 pages | 1.30 MB | 1 year ago
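The excerpt describes pandas' date_range(): given two of start, end, and periods, it produces a fixed-frequency sequence of timestamps. A stdlib-only sketch of the daily-frequency case (my simplified stand-in, not pandas itself):

```python
from datetime import date, timedelta

def daily_range(start=None, end=None, periods=None):
    """Fixed daily-frequency date sequence, mimicking pandas.date_range with
    freq='D'. Exactly two of start/end/periods must be given."""
    day = timedelta(days=1)
    if start is not None and end is not None:
        out, d = [], start
        while d <= end:
            out.append(d)
            d += day
        return out
    if start is not None and periods is not None:
        return [start + i * day for i in range(periods)]
    if end is not None and periods is not None:
        return [end - (periods - 1 - i) * day for i in range(periods)]
    raise ValueError("give exactly two of start, end, periods")

# mirrors pd.date_range('2018/08/10', '2018/08/20'): 11 daily timestamps
days = daily_range(start=date(2018, 8, 10), end=date(2018, 8, 20))
```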
  • PDF 【PyTorch深度学习-龙龙老师】-测试版202112

    + ε, ε ∼ N(0, 0.1²). By randomly sampling n = 100 times, we obtain a training dataset D_train of n samples. The code is as follows: data = []  # list holding the sample set for i in range(100): # loop to sample 100 points x = np.random.uniform(-10., 10.) # randomly sample the input x # sample Gaussian noise eps = np … the mean squared error loss on the training set. The code is as follows: def mse(b, w, points): # compute the MSE loss from the current w, b parameters totalError = 0 for i in range(0, len(points)): # iterate over all points x = points[i, 0] # input x of point i y = points[i, 1] # … w,b b_gradient = 0 w_gradient = 0 M = float(len(points)) # total number of samples for i in range(0, len(points)): x = points[i, 0] y = points[i, 1] # derivative of the error function w.r.t. b: grad_b
    0 credits | 439 pages | 29.91 MB | 1 year ago
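The excerpt's mse and gradient code computes the loss of y ≈ wx + b and its derivatives. A compact, runnable version over plain (x, y) tuples instead of a numpy array (the toy data and the learning rate below are my choices):

```python
def mse(b, w, points):
    # mean squared error of y = w*x + b over (x, y) pairs
    return sum((y - (w * x + b)) ** 2 for x, y in points) / len(points)

def step_gradient(b, w, points, lr):
    # one gradient-descent update for b and w
    m = float(len(points))
    grad_b = sum(2.0 / m * ((w * x + b) - y) for x, y in points)
    grad_w = sum(2.0 / m * ((w * x + b) - y) * x for x, y in points)
    return b - lr * grad_b, w - lr * grad_w

# fit y = 2x + 1 on a few exact points (toy data, my example)
pts = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]
b, w = 0.0, 0.0
for _ in range(1000):
    b, w = step_gradient(b, w, pts, lr=0.1)
```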
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    parameters. For example, horizontal flip is a boolean choice, rotation requires a fixed angle or a range of rotation, and random augment requires multiple parameters. Figure 7-1: The plethora of choices create_model(size=layer_size) opt = optimizers.SGD(learning_rate=learning_rate) losses = [] for iteration in range(2000): with tf.GradientTape() as tape: output = model(X) loss = tf.reduce_mean(tf.math.square(Y parameters, it samples a value randomly within the specified range. For example, given two hyperparameters and such that is real valued in range and , RS can generate an arbitrary number of trials. For
    0 credits | 33 pages | 2.48 MB | 1 year ago
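The excerpt notes that random search (RS) samples each hyperparameter uniformly from its range for every trial. A minimal sketch (the objective function and search space below are invented for illustration):

```python
import random

def random_search(objective, space, trials, seed=0):
    """Random search over a hyperparameter space given as {name: (low, high)}."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(trials):
        # sample every hyperparameter uniformly within its specified range
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# toy objective with its minimum at learning_rate=0.1, layer_size=64
obj = lambda p: (p["learning_rate"] - 0.1) ** 2 + ((p["layer_size"] - 64) / 64) ** 2
space = {"learning_rate": (0.001, 1.0), "layer_size": (8, 512)}
best, score = random_search(obj, space, trials=500)
```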
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    not uniformly distributed, i.e. the data is more likely to take values in a certain range than another equally sized range. It creates equal sized quantization ranges (bins), regardless of the frequency of original floating point domain between and into non-overlapping bins that collectively span the entire range, where is the number of bits allocated for quantization. All values within the same bin share the quantization bin boundary is denoted by a cross. This is not ideal because the precision allocated to the range [-4.0, -2.0] or [2.0, 4.0] (spanning two quantization bins) is the same as the precision allocated
    0 credits | 34 pages | 3.18 MB | 1 year ago
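The excerpt describes splitting the floating-point domain into 2^b equal-sized, non-overlapping bins, regardless of where the data actually concentrates. A sketch of that bin assignment (the [-4.0, 4.0] domain and 2-bit width follow the excerpt's example; the function itself is mine):

```python
def bin_index(x, lo=-4.0, hi=4.0, bits=2):
    """Assign x to one of 2**bits equal-width quantization bins spanning [lo, hi]."""
    n_bins = 2 ** bits
    x = min(max(x, lo), hi)                        # clamp into the domain
    width = (hi - lo) / n_bins                     # every bin gets the same width
    return min(int((x - lo) / width), n_bins - 1)  # hi itself falls in the last bin

# with bits=2 the bins are [-4,-2), [-2,0), [0,2), [2,4]
```

All values in the same bin share one representative value, which is why equal-width bins waste precision wherever the data is sparse.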
  • PDF 全连接神经网络实战. pytorch 版

    …10 epochs, where each epoch trains once and then tests accuracy once. The training function takes the training data, the network, the loss function, and the optimizer; the test function does not need an optimizer: epochs = 10 for t in range(epochs): print(f"Epoch {t+1}\n-------------------------------") train_loop(train_dataloader … 3.2 Initializing network weights, method one. Now let's modify our program: first train the model and save it, then load the exported model parameters into a new model and test its accuracy: epochs = 10 for t in range(epochs): print(f"Epoch {t+1}\n-------------------------------") path = './model' + s … use unsqueeze to convert to a 2-D tensor before it can be used to compute the loss. The calling code is the same as before; we use all the data for training at once and train for 1000 epochs: epochs = 1000 for t in range(epochs): print(f"Epoch {t+1}\n-------------------------------") train_loop(x_data, y_data
    0 credits | 29 pages | 1.40 MB | 1 year ago
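The training loop in the excerpt repeats a train pass and a test pass for a fixed number of epochs. The same skeleton, with the network-specific parts abstracted into callbacks (the toy train/eval functions are mine):

```python
def run(epochs, train_step, eval_step):
    """Minimal train/evaluate loop mirroring the excerpt's structure."""
    history = []
    for t in range(epochs):
        print(f"Epoch {t+1}\n-------------------------------")
        train_step()                   # one pass over the training data
        history.append(eval_step())    # then one evaluation pass
    return history

# toy usage: "training" just counts calls, "evaluation" reports the count
state = {"steps": 0}
def train_step(): state["steps"] += 1
def eval_step(): return state["steps"]
hist = run(3, train_step, eval_step)
```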
  • PDF Experiment 1: Linear Regression

    intuition about linear regression.) To get the best viewing results on your surface plot, use the range of theta values that we suggest in the code skeleton below. J_vals = zeros(100, 100); % in… it's time to select a learning rate α. The goal of this part is to pick a good learning rate in the range of 0.001 ≤ α ≤ 10. You will do this by making an initial selection, running gradient descent and … 'Number of iterations') ylabel('Cost J') If you picked a learning rate within a good range, your plot should appear like the figure below. (figure: cost J decreasing over the number of iterations)
    0 credits | 7 pages | 428.11 KB | 1 year ago
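The exercise above asks you to compare learning rates by plotting cost J against iterations. A toy stand-in using J(θ) = θ² instead of the linear-regression cost (my simplification) shows the three regimes: too small, good, and too large.

```python
def gd_costs(alpha, iters=50):
    """Gradient descent on J(theta) = theta**2, recording J each iteration."""
    theta, costs = 5.0, []
    for _ in range(iters):
        theta -= alpha * 2 * theta     # dJ/dtheta = 2 * theta
        costs.append(theta ** 2)
    return costs

slow = gd_costs(0.01)      # converges, but slowly
good = gd_costs(0.3)       # converges quickly
too_big = gd_costs(1.5)    # diverges: J grows every iteration
```

Plotting each list against the iteration index reproduces the shape of the figure in the exercise.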
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    distribution, with a nice range, say [-0.05, 0.05]. Refer to Figure 4-8. Figure 4-8: Initializing the embedding table at the beginning with a uniform probability distribution in a reasonable range ([-0.05, 0.05]) … in quality needs to be determined empirically, but d is often in hundreds for NLP problems, and N might range from thousands to millions. Now that we are familiar with generating embeddings, how do we use them … can fit in less than 100MB of memory. Despite this reduction, it still may not be suitable for a range of mobile and edge devices. Do you recall a technique that can reduce it further? Yes, Quantization
    0 credits | 53 pages | 3.92 MB | 1 year ago
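The excerpt initializes an N x d embedding table uniformly in [-0.05, 0.05] before training. A sketch with plain Python lists (the shape and seed are arbitrary choices of mine):

```python
import random

def init_embedding_table(vocab_size, dim, low=-0.05, high=0.05, seed=0):
    """Build a vocab_size x dim table with entries drawn uniformly from [low, high]."""
    rng = random.Random(seed)
    return [[rng.uniform(low, high) for _ in range(dim)] for _ in range(vocab_size)]

table = init_embedding_table(vocab_size=1000, dim=8)
```

Training then moves each row away from this random start toward a useful representation of its token.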