IT文库
Categories: Cloud Computing & Big Data (16), Machine Learning (16)
Languages: English (9), Chinese (Simplified) (7)
Format: PDF (16)
This search took 0.086 seconds and found about 16 matching results.
  • PDF document: Amazon AWS AI Services Overview

    …scale (Amazon EMR + Spark). Mapillary: computer vision for crowd-sourced maps. Hudl: predictive analytics on sports plays. Upserve: restaurant table management & POS for forecasting customer traffic. TuSimple… "…scalable solution, it also offers potential to speed time to market for a new generation of voice and text interactions such as our recently launched Capital One skill for Alexa." "As a heavy user of AWS…"
    0 points | 56 pages | 4.97 MB | 1 year ago
  • PDF document: Qwen (千问) Large Language Model — Chinese Documentation

    …quality data for aligning to human preferences. Qwen is capable of natural language understanding, text generation, vision understanding, audio understanding, tool use, role play, playing as an AI agent, etc. … messages = [{"role": "user", "content": prompt}]; text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True); model_inputs = tokenizer([text], return_tensors="pt").to(device)…
    0 points | 56 pages | 835.78 KB | 1 year ago
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    …eliminating the need to generate samples beforehand. Next, we will discuss a few label-invariant image and text transformation techniques. … Like the image transformations, there are simple text transformation techniques… A hands-on project awaits towards the end of the text transformation section as well. … uses the Natural Language Toolkit (NLTK) and creates a text sequence from a sentence: from random import choice, randint; from keras.preprocessing.text import text_to_word_sequence…
    0 points | 56 pages | 18.93 MB | 1 year ago
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    …knowing all the encyclopedic data about them. When working with deep learning models and inputs such as text, which are not in numerical format, having an algorithmic way to meaningfully represent these inputs… to compress the information content of high-dimensional concepts such as text, image, audio, or video into a low-dimensional representation such as a fixed-length vector of floating-point numbers. From the perspective of training the model, it is agnostic to what the embedding is for (a piece of text, audio, image, video, or some abstract concept). Here is a quick recipe to train embedding-based…
    0 points | 53 pages | 3.92 MB | 1 year ago
  • PDF document: PyTorch Release Notes

    …Tacotron 2 and WaveGlow v1.1 model: this text-to-speech (TTS) system is a combination of neural network models, including a modified Tacotron… developers can inspect the performance of their models with visualization by using the DLProf Viewer in a web browser, or by analyzing text reports. DLProf is available on NGC or through a Python pip wheel installation…
    0 points | 365 pages | 2.94 MB | 1 year ago
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    …the model should be able to encode the given text in a sequence of embeddings such that some semantic relationship is preserved between related pieces of text. "A very happy birthday"… Wikipedia pages. Howard, Jeremy and Sebastian Ruder. "Universal Language Model Fine-tuning for Text Classification." arXiv, 18 Jan. 2018, doi:10.48550/arXiv.1801.06146. … We will work on the AGNews dataset (the same one used in Chapter 4) for text classification using a pre-trained BERT model, and demonstrate better quality and faster convergence…
    0 points | 31 pages | 4.03 MB | 1 year ago
  • PDF document: 【PyTorch 深度学习 - 龙龙老师】测试版 202112 (PyTorch Deep Learning, by teacher Longlong — beta 202112)

    …1.6.4 Installing common editors: there are many ways to write Python programs — interactively with ipython or ipython notebook, or with a full IDE such as Sublime Text, PyCharm, or VS Code for medium and large projects. This book recommends PyCharm for writing and debugging and VS Code for interactive development; both are free to use… TEXT = data.Field(tokenize='spacy', fix_length=80); LABEL = data.LabelField(dtype=torch.float); train_data, test_data = datasets.IMDB.splits(TEXT, LABEL) automatically downloads, loads, and splits the IMDB dataset… TEXT.build_vocab(train_data) builds the vocabulary and encodes tokens, considering only the top 10,000 words (takes about 5 minutes)…
    0 points | 439 pages | 29.91 MB | 1 year ago
  • PDF document: Keras: 基于 Python 的深度学习库 (Keras: the Python-Based Deep Learning Library)

    …6.2 Text preprocessing — 6.2.1 Tokenizer: keras.preprocessing.text.Tokenizer(num_words=None, filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~', lower=True, split=' ', char_level=False)… By default, all punctuation is removed and the text is converted to a space-separated sequence of words (words may contain the ' character); these sequences are then split into lists of tokens and indexed or vectorized. Index 0 is reserved and never assigned to any word. 6.2.2 hashing_trick: keras.preprocessing.text.hashing_trick(text…
    0 points | 257 pages | 1.19 MB | 1 year ago
  • PDF document: 动手学深度学习 v2.0 (Dive into Deep Learning v2.0)

    …text_labels = ['t-shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot'] — returns the text labels of the Fashion-MNIST dataset… raw_text = read_data_nmt(); print(raw_text[:75])… preprocessing steps: for example, replacing non-breaking spaces with ordinary spaces, lowercasing, and inserting spaces between words and punctuation… def preprocess_nmt(text): …
    0 points | 797 pages | 29.45 MB | 1 year ago
  • PDF document: keras tutorial

    …Choose an algorithm that best fits the type of learning process (e.g. image classification, text processing, etc.) and the available input data; an algorithm is represented by a Model in Keras… can stop the training itself (EarlyStopping method) based on some condition. Text processing: provides functions to convert text into NumPy arrays suitable for machine learning, usable in data preparation… In machine learning, all types of input data, such as text, images, or videos, are first converted into arrays of numbers and then fed into the algorithm…
    0 points | 98 pages | 1.57 MB | 1 year ago
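The Qwen documentation snippet above centers on tokenizer.apply_chat_template, which renders a list of chat messages into a single prompt string before tokenization. As a rough, dependency-free sketch of what such a template produces — assuming Qwen's ChatML-style format with <|im_start|>/<|im_end|> markers; the function name here is illustrative, not the actual transformers API:

```python
def apply_chatml_template(messages, add_generation_prompt=True):
    """Render a list of {'role', 'content'} dicts into a ChatML-style prompt string.

    Illustrative sketch only: real tokenizers apply a model-specific Jinja
    template via tokenizer.apply_chat_template.
    """
    parts = []
    for m in messages:
        # Each turn is wrapped in role markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Leave the assistant turn open so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
print(apply_chatml_template(messages))
```

The resulting string would then be passed to the tokenizer (e.g. tokenizer([text], return_tensors="pt") in the snippet) to obtain model inputs.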