IT文库

Filters
  • Category: 云计算&大数据 (18), 机器学习 (18)
  • Language: English (10), Chinese (Simplified) (8)
  • Format: PDF (18)
Search completed in 0.067 seconds; about 18 matching results found.
  • PDF: keras tutorial

    …algorithms (CNN, RNN, etc.) can be represented in a simple and efficient manner. The following diagram depicts the relationship between model, layer and core modules. … Keras can be used in the data-preparation phase of machine learning. Sequence processing: provides functions to generate time-based data from the given input data. …
    >>> model = Sequential()
    >>> # apply an unshared-weight 1-D convolution of length 3 to a sequence
    >>> # with 10 timesteps and 16 output filters
    >>> model.add(LocallyConnected1D(16, 3, input_shape=(10…
    0 credits | 98 pages | 1.57 MB | 1 year ago
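The snippet above references `LocallyConnected1D`, Keras's 1-D convolution whose filter weights are *not* shared across positions. A minimal numpy sketch of that idea, with shapes matching the snippet (10 timesteps, kernel length 3, 16 filters); the input feature width of 4 is an arbitrary assumption for illustration:

```python
import numpy as np

def locally_connected_1d(x, weights, biases):
    """Unshared-weight 1-D convolution (valid padding, stride 1).

    x:       (timesteps, in_features)
    weights: (out_steps, kernel_len, in_features, filters) -- a separate kernel per position
    biases:  (out_steps, filters)
    """
    out_steps, kernel_len, _, filters = weights.shape
    out = np.empty((out_steps, filters))
    for t in range(out_steps):
        patch = x[t:t + kernel_len]  # local window, as in an ordinary conv
        # Contract the window against this position's own kernel.
        out[t] = np.tensordot(patch, weights[t], axes=([0, 1], [0, 1])) + biases[t]
    return out

rng = np.random.default_rng(0)
timesteps, in_features, kernel_len, filters = 10, 4, 3, 16
out_steps = timesteps - kernel_len + 1  # 8, as with Keras "valid" padding
x = rng.normal(size=(timesteps, in_features))
w = rng.normal(size=(out_steps, kernel_len, in_features, filters))
b = np.zeros((out_steps, filters))
print(locally_connected_1d(x, w, b).shape)  # (8, 16)
```

A shared-weight `Conv1D` would instead use a single `(kernel_len, in_features, filters)` kernel for every position, which is why locally connected layers have far more parameters.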
  • PDF: 动手学深度学习 v2.0

    …'Gumbel', 'HalfCauchy', 'HalfNormal', … scalar value 1, with the shape defined by the variable argument size. Args: size (int...): a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a … 9.5 Machine translation and datasets: language models are the key to natural language processing, and machine translation is their most successful benchmark, because machine translation is exactly the core problem of sequence-transduction models, which convert an input sequence into an output sequence. Sequence-transduction models play a vital role in all kinds of modern AI applications, so we make them the focus of the rest of this chapter and of Chapter 10. To that end, this section introduces the machine-translation problem and the dataset used in the following sections.
    0 credits | 797 pages | 29.45 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    …accuracy. Hence, this is a trade-off. We also ensure that the tokenized input results in an integer sequence with exactly 250 tokens. This might mean padding short texts with padding tokens and truncating … tokenize, by truncating the rest of the sequence:
    max_seq_len = 100
    vectorization_layer = tf.keras.layers.TextVectorization(
        max_tokens=vocab_size, output_sequence_length=max_seq_len)
    Once we have initialized … words we are confident will not be in the vocabulary:
    edl_sequence_output = vectorization_layer(
        [['efficient deep learning x123!']]).numpy()[0, :4]
    edl_sequence_output
    array([ 1, 1379, 1585, 1])
    The code snippet…
    0 credits | 53 pages | 3.92 MB | 1 year ago
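The `TextVectorization` call in the snippet above maps words to integer indices, sends out-of-vocabulary words to a reserved OOV index, and pads or truncates every input to `output_sequence_length` tokens. A rough pure-Python equivalent of that behavior; the tiny vocabulary here is invented for illustration, and the reserved indices 0 (padding) and 1 (OOV) follow the Keras convention:

```python
def vectorize(text, vocab, max_seq_len, oov_index=1):
    """Map words to vocab indices (OOV -> oov_index), then pad/truncate to max_seq_len."""
    ids = [vocab.get(w, oov_index) for w in text.lower().split()]
    ids = ids[:max_seq_len]                # truncate long inputs
    ids += [0] * (max_seq_len - len(ids))  # right-pad short inputs with 0
    return ids

# Toy vocabulary; 0 and 1 are reserved for padding and OOV respectively.
vocab = {'deep': 2, 'learning': 3, 'efficient': 4}
print(vectorize('efficient deep learning x123!', vocab, max_seq_len=6))
# 'x123!' is out of vocabulary, so it maps to 1 -> [4, 2, 3, 1, 0, 0]
```

The trade-off the chapter mentions is visible here: a smaller `vocab` saves embedding parameters but sends more words to the OOV bucket.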
  • PDF: Keras: 基于 Python 的深度学习库

    …6.2.4 text_to_word_sequence … 123 · 6.3 image preprocessing … 20.2 HDF5Matrix … 236 · 20.3 Sequence … 236 · 20.4 to_categorical …
    # video encoding built on the previously defined vision model (weights are reused)
    encoded_frame_sequence = TimeDistributed(vision_model)(video_input)  # output is a sequence of vectors
    encoded_video = LSTM(256)(encoded_frame_sequence)  # output is a single vector
    # this is the model-level representation of the question encoder, reusing the same weights as before:
    0 credits | 257 pages | 1.19 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    …Natural Language Toolkit (NLTK) and creates a text sequence from a sentence:
    from random import choice, randint
    from keras.preprocessing.text import text_to_word_sequence
    # NLTK Import
    try:
        from nltk.corpus import …
    …choice(synonyms(word) or [word])
    original = 'We enjoyed our short vacation in Mexico'
    words = text_to_word_sequence(original)  # Tokenize the sentence.
    Now, let's go through the different text transformations … """It inserts a synonym for every candidate word at a random position in the word sequence."""
    def ins_transformation(words, candidates):
        for candidate in candidates:
            pos = randint(0…
    0 credits | 56 pages | 18.93 MB | 1 year ago
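The chapter's augmentation snippet uses NLTK's WordNet to swap words for synonyms. A self-contained sketch of the same synonym-replacement idea, with a hand-rolled synonym table standing in for the WordNet lookup (the table and the `p` probability parameter are assumptions for illustration):

```python
import random

# Hand-rolled synonym table standing in for NLTK WordNet synset lookups.
SYNONYMS = {'enjoyed': ['liked'], 'short': ['brief'], 'vacation': ['holiday']}

def replace_transformation(words, p=1.0, rng=None):
    """Replace each word that has a known synonym, with probability p."""
    rng = rng or random.Random(0)  # seeded for reproducible augmentation
    return [rng.choice(SYNONYMS[w]) if w in SYNONYMS and rng.random() < p else w
            for w in words]

words = 'we enjoyed our short vacation in mexico'.split()
print(replace_transformation(words))
# -> ['we', 'liked', 'our', 'brief', 'holiday', 'in', 'mexico']
```

With `p < 1.0` each candidate is replaced only some of the time, so repeated calls yield different augmented variants of the same sentence.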
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    …natural language inputs. Then by definition the model should be able to encode the given text in a sequence of embeddings such that there is some semantic relationship preserved between pieces of text that … intractable. See figure 6-2 for a general theme that these tasks follow. If you consider … to be a sequence that you can create from your unlabeled dataset, a few simple pretext tasks can be to predict the … of the pretext task. This works well for domains like natural language, where your data will be a sequence of tokens. You can extend the analogy to … being a tensor of rank …, and hide part of the input and…
    0 credits | 31 pages | 4.03 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    …which eliminates ambiguity when decoding), we can easily reconstruct the original sequence of symbols from the encoded sequence and the lookup table. Refer to the Wikipedia article on arithmetic coding to learn … to a low-precision domain. The following exercise will apply them to quantize an arbitrary data sequence. Exercise: Data Quantization. Let's put our learnings from the previous exercise into practice.
    0 credits | 33 pages | 1.96 MB | 1 year ago
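The quantization exercise mentioned above maps a float sequence to a low-precision domain. A minimal sketch of affine (scale/zero-point) quantization, the usual scheme for this; the function names and the 8-bit default are assumptions, not the book's exact code:

```python
import numpy as np

def quantize(x, bits=8):
    """Affine quantization of a float array to unsigned `bits`-bit integers."""
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels or 1.0  # guard against constant input
    dtype = np.uint8 if bits <= 8 else np.uint16
    q = np.round((x - lo) / scale).astype(dtype)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map integer codes back to approximate float values."""
    return q.astype(np.float64) * scale + lo

x = np.linspace(-1.0, 1.0, 11)
q, scale, lo = quantize(x)
x_hat = dequantize(q, scale, lo)
# Rounding bounds the reconstruction error by about half a quantization step.
print(q.dtype, float(np.abs(x - x_hat).max()))
```

Storing `q` plus the two floats `(scale, lo)` uses roughly a quarter of the memory of the original float32 data, at the cost of that bounded reconstruction error.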
  • PDF: 深度学习与PyTorch入门实战 - 46. 时间序列表示

    Time-series representation. Presenter: 龙良曲. Spatial signals; temporal signals? Sequence. http://slazebni.cs.illinois.edu/spring17/lec02_rnn.pdf Sequence representation: [seq_len, feature_len], e.g. [100, 1], [28, 28], [words, word_vec].
    0 credits | 14 pages | 1.16 MB | 1 year ago
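The slide excerpt above represents any sequence as a `[seq_len, feature_len]` tensor: one feature vector per time step. The three example shapes it lists can be made concrete as follows (the specific sizes 5 and 100 for the sentence are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# A sentence: [words, word_vec] -- one embedding vector per word.
words, word_vec = 5, 100
sentence = rng.normal(size=(words, word_vec))

# An image read row-by-row as a sequence: [28, 28], as in the MNIST example.
image_as_seq = rng.normal(size=(28, 28))

# A scalar time series (e.g. one price per step): [100, 1].
prices = rng.normal(size=(100, 1))

for name, seq in [('sentence', sentence), ('image', image_as_seq), ('prices', prices)]:
    print(name, seq.shape)
```

RNN frameworks then stack such sequences into a batch, giving `[batch, seq_len, feature_len]` (or `[seq_len, batch, feature_len]` in PyTorch's default layout).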
  • PDF: 华为云深度学习在文本分类中的实践-李明磊

    …Classification, matching, WordPiece, Keras tokenizer, Jieba, HanLP · Model saving, deployment, testing · Vocab, sequence labeling, Huawei tokenizer, word2vec, ELMo · pb, ckpt, H5 (Keras) · RESTful API, RPC API, Function … --->simple · Char replacement, synonym replacement, char filter · Featurizer: classification / matching / sequence labeling · TF model, sklearn model · feature: CountVectorizer, sentence encoder, char … stop word…
    0 credits | 23 pages | 1.80 MB | 1 year ago
  • PDF: AI大模型千问 qwen 中文文档

    …value of weight decay. • --adam_beta2: the value of β2 in Adam. • --model_max_length: the maximum sequence length. • --use_lora: whether to use LoRA. Adding --q_lora can enable Q-LoRA. • --gradient_checkpointing: …
    …field(default="adamw_torch")
    model_max_length: int = field(
        default=8192,
        metadata={
            "help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."
        },
    )
    use_lora: bool = False
    0 credits | 56 pages | 835.78 KB | 1 year ago
18 results in total · Page 1 of 2
IT文库 ©1024 - 2025 | Sitemap
Powered by MOREDOC AI v3.3.0-beta.70