IT文库
Category: All | Cloud Computing & Big Data (20) | Machine Learning (20)
Language: All | English (18) | Chinese (Simplified) (2)
Format: All | PDF (20)
Search completed in 0.024 seconds; about 20 results found.
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    … parameters of large NLP models [15]. In this situation, embedding-free approaches like pQRNN [16] are a viable alternative. pQRNN uses the projection operation, which maps a given input token to a B-bit fingerprint …
    0 credits | 53 pages | 3.92 MB | 1 year ago
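The excerpt describes mapping a token to a B-bit fingerprint via a projection operation. A hypothetical hashing sketch of that idea — not the actual pQRNN projection; `token_fingerprint` and its salting scheme are invented for illustration:

```python
import hashlib

def token_fingerprint(token, bits=16):
    """Map a token to a B-bit fingerprint, one bit per salted hash.

    Illustrative stand-in for a projection operation; not the real
    pQRNN implementation.
    """
    fingerprint = []
    for i in range(bits):
        # Salt the token with the bit index so each bit is independent.
        digest = hashlib.md5(f"{i}:{token}".encode()).digest()
        fingerprint.append(digest[0] & 1)
    return fingerprint

fp = token_fingerprint("hello", bits=8)
print(len(fp))                               # 8
print(fp == token_fingerprint("hello", 8))   # deterministic: True
```

The key property is that the fingerprint is computed on the fly from the token itself, so no embedding table has to be stored.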
  • PDF: keras tutorial

    … penalties on the weights during the optimization process. To summarise, a Keras layer requires the below minimum details to create a complete layer: shape of the input data; number of neurons/units … dimension, and 2 denotes the third dimension … MinMaxNorm constrains weights to a norm between specified minimum and maximum values: from keras.models import Sequential; from keras.layers import Activation, Dense … used to merge a list of inputs; it supports add(), subtract(), multiply(), average(), maximum(), minimum(), concatenate() and dot() functionalities. Adding a layer is used to add two layers. Syntax …
    0 credits | 98 pages | 1.57 MB | 1 year ago
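The excerpt mentions Keras's MinMaxNorm weight constraint. A rough plain-Python sketch of the underlying idea — rescaling a vector so its L2 norm lies in [min_value, max_value]; the real `keras.constraints.MinMaxNorm` also takes `rate` and `axis` arguments:

```python
import math

def min_max_norm(weights, min_value=0.5, max_value=1.0):
    """Rescale `weights` so its L2 norm falls in [min_value, max_value].

    Simplified sketch of keras.constraints.MinMaxNorm applied to a
    single weight vector.
    """
    norm = math.sqrt(sum(w * w for w in weights))
    if norm == 0.0:
        return list(weights)
    # Clamp the norm into the allowed band, then rescale the vector.
    clipped = min(max(norm, min_value), max_value)
    return [w * clipped / norm for w in weights]

w = min_max_norm([3.0, 4.0])          # norm 5 is clipped down to 1
print([round(x, 6) for x in w])       # [0.6, 0.8]
```

Constraints like this are applied after each optimizer update, keeping weights inside a bounded region during training.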
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    … four trials with four pairs of hyperparameter values. The hyperparameter values which achieve the minimum loss are the winners. Let's start by importing the relevant libraries and creating a random classification … from the trial set. Each model is trained for 2000 iterations. At the end of a trial, we record the minimum loss achieved with the associated hyperparameters: search_results = []; for trial_id, (layer_size, … Loss: 0.11825279891490936. As we can see from the trial results, the last trial #3 achieves the minimum loss value. This exercise demonstrates the essence of HPO, which is to perform trials with different …
    0 credits | 33 pages | 2.48 MB | 1 year ago
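The trial loop the excerpt describes can be sketched as a grid search that records the loss per hyperparameter pair; `run_trial` below is a made-up stand-in for actually training a model:

```python
import itertools

def run_trial(layer_size, learning_rate):
    """Stand-in for model training; returns a synthetic loss."""
    return (layer_size - 64) ** 2 / 1e4 + (learning_rate - 0.01) ** 2 * 1e2

layer_sizes = [16, 64]
learning_rates = [0.01, 0.1]

# One trial per (layer_size, learning_rate) pair; record each loss.
search_results = []
for trial_id, (size, lr) in enumerate(itertools.product(layer_sizes, learning_rates)):
    loss = run_trial(size, lr)
    search_results.append((loss, trial_id, {"layer_size": size, "lr": lr}))

# The hyperparameters achieving the minimum loss are the winners.
best_loss, best_id, best_params = min(search_results)
print(best_params)  # {'layer_size': 64, 'lr': 0.01}
```

Grid search is the simplest HPO strategy; the same record-and-compare loop underlies more sophisticated search methods.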
  • PDF: PyTorch Brand Guidelines

    … symbol, never exceed a minimum of 24 pixels for screen or 10 mm for print. This ensures consistency and legibility of the symbol. Minimum Screen Size: 24 px. Minimum Print Size: 10 mm. …
    0 credits | 12 pages | 34.16 MB | 1 year ago
  • PDF: Lecture 2: Linear Regression

    … decreases fastest if one goes from θ in the direction of the negative gradient of J at θ. Find a local minimum of a differentiable function using gradient descent. Algorithm 1 Gradient Descent. 1: Given a starting … decrease for each iteration. Usually, SGD has θ approaching the minimum much faster than batch GD. SGD may never converge to the minimum, and oscillation may happen. A variant: mini-batch, say pick up …
    0 credits | 31 pages | 608.38 KB | 1 year ago
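The gradient-descent rule sketched in the excerpt — step in the direction of the negative gradient — applied to a toy quadratic J(θ) = (θ − 3)²:

```python
def gradient_descent(grad, theta0, alpha=0.1, iters=200):
    """Plain gradient descent: theta <- theta - alpha * grad(theta)."""
    theta = theta0
    for _ in range(iters):
        theta = theta - alpha * grad(theta)
    return theta

# J(theta) = (theta - 3)^2, so grad J(theta) = 2 * (theta - 3).
theta = gradient_descent(lambda t: 2 * (t - 3), theta0=0.0)
print(round(theta, 4))  # 3.0
```

With a fixed step size α the iterates shrink the error geometrically here; too large an α would make them oscillate or diverge, which is the step-size sensitivity the lecture discusses.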
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    … given vector x.""" def quantize(x, x_min, x_max, b): # Clamp x to lie in [x_min, x_max]. x = np.minimum(x, x_max); x = np.maximum(x, x_min); # Compute scale as discussed. scale = get_scale(x_min, x_max, …); x_q = np.floor((x - x_min) / scale); # Clamp the quantized value to be less than (2^b - 1). x_q = np.minimum(x_q, 2**b - 1); # Return x_q as an unsigned integer. … Given a 32-bit floating-point weight matrix, we can map the minimum weight value x_min to 0, and the maximum weight value x_max to 2^b − 1 (b is the number of bits of precision) …
    0 credits | 33 pages | 1.96 MB | 1 year ago
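The code fragments in the excerpt outline linear quantization. A reconstructed, self-contained version of the same scheme — plain Python instead of NumPy, with `get_scale` filled in from the mapping the excerpt describes (x_min → 0, x_max → 2^b − 1), so treat it as a sketch rather than the book's exact code:

```python
import math

def get_scale(x_min, x_max, b):
    """Width of one quantization bin for b-bit quantization over [x_min, x_max]."""
    return (x_max - x_min) / (2 ** b - 1)

def quantize(x, x_min, x_max, b):
    """Linearly quantize scalar x to an unsigned b-bit integer."""
    # Clamp x to lie in [x_min, x_max].
    x = min(max(x, x_min), x_max)
    scale = get_scale(x_min, x_max, b)
    # Map to a bin index, then clamp to the top bin (2^b - 1).
    x_q = math.floor((x - x_min) / scale)
    return min(x_q, 2 ** b - 1)

print(quantize(-1.0, -1.0, 1.0, 8))  # 0
print(quantize(0.0, -1.0, 1.0, 8))   # 127
```

Dequantization is the inverse map `x_min + x_q * scale`, which recovers each value up to at most one bin width of error.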
  • PDF: Lecture Notes on Linear Regression

    … Gradient Descent (GD) method is a first-order iterative optimization algorithm for finding the minimum of a function. If the multi-variable function J(θ) is differentiable in a neighborhood of a point … Fig. 2: the colored contours represent the objective function, and the GD algorithm converges to the minimum step by step. The choice of the step size α actually has a very important influence on the convergence …
    0 credits | 6 pages | 455.98 KB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    … in [-4, 4]. However, quantization linearly assigns precision to all ranges equally, based on the minimum and maximum values it observes. Each quantization bin boundary is denoted by a cross. This … a regularizing effect [18] due to dropping spurious connections ([18] https://en.wikipedia.org/wiki/Minimum_description_length). Apart from the most commonly used magnitude-based pruning, there are other heuristics …
    0 credits | 34 pages | 3.18 MB | 1 year ago
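Magnitude-based pruning, mentioned at the end of the excerpt, in a minimal sketch: zero out the fraction of weights with the smallest absolute values. Illustrative only — real pruning operates on per-layer tensors and is usually applied gradually during training:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of the weights with smallest |w|."""
    k = int(len(weights) * sparsity)
    # Indices of the k smallest-magnitude weights.
    drop = sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k]
    pruned = list(weights)
    for i in drop:
        pruned[i] = 0.0
    return pruned

print(magnitude_prune([0.1, -2.0, 0.05, 1.5], sparsity=0.5))
# [0.0, -2.0, 0.0, 1.5]
```

The heuristic assumes small-magnitude weights contribute least to the output, so removing them costs little accuracy while making the tensor sparse and compressible.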
  • PDF: Experiment 2: Logistic Regression and Newton's Method

    … gradient descent method has a very slow convergence rate and may take a long while to achieve the minimum. 2. What values of θ did you get after achieving convergence? 3. Calculate L(θ) in each iteration …
    0 credits | 4 pages | 196.41 KB | 1 year ago
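The experiment contrasts slow gradient descent with Newton's method, which also uses second-derivative information: θ ← θ − L′(θ)/L″(θ). A one-dimensional sketch on a toy objective (not the experiment's logistic-regression setup, where θ is a vector and the Hessian a matrix):

```python
def newton_minimize(grad, hess, theta0, iters=40):
    """1-D Newton's method for minimization: theta <- theta - grad/hess."""
    theta = theta0
    for _ in range(iters):
        theta = theta - grad(theta) / hess(theta)
    return theta

# Minimize L(theta) = (theta - 2)^4:
#   grad = 4 * (theta - 2)^3,  hess = 12 * (theta - 2)^2
theta = newton_minimize(lambda t: 4 * (t - 2) ** 3,
                        lambda t: 12 * (t - 2) ** 2,
                        theta0=0.0)
print(round(theta, 3))  # 2.0
```

Because each step rescales by the local curvature, Newton's method typically needs far fewer iterations than gradient descent, at the cost of computing (and inverting) the Hessian.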
  • PDF: Machine Learning

    … decreases fastest if one goes from θ in the direction of the negative gradient of L at θ. • Find a local minimum of a differentiable function using gradient descent: θ_j ← θ_j − α ∂L(θ)/∂θ_j, ∀j, where α is the so-called …
    0 credits | 19 pages | 944.40 KB | 1 year ago