IT文库
Categories: All · Cloud Computing & Big Data (294) · VirtualBox (113) · OpenShift (75) · Apache Kyuubi (44) · Pandas (32) · Machine Learning (5) · RocketMQ (4) · Apache Karaf (4) · Cloud Native (CNCF) (3) · Harbor (3)

Languages: All · English (200) · Chinese, Simplified (93) · Chinese, Simplified (1)

Formats: All · PDF documents (272) · Other documents (22)
This search took 0.725 seconds and found about 294 matching results.
  • PDF: Hadoop 3.0 and the Future (Hadoop 3.0以及未来)

    Excerpt: a talk by Liu Yi (刘轶). Speaker bio: Apache Hadoop committer and Project Management Committee member; architect in eBay's Paid IM (Internet Marketing) unit, leading the architecture of eBay's product advertising, internet-marketing data, and experimentation platforms, and the use of open-source big-data projects such as Hadoop, Spark, Kafka, and Cassandra to build eBay's advertising and data platforms; before eBay, six years at Intel as a big-data architect. The slides trace the Hadoop timeline (the MapReduce paper, HBase, Hive, the founding of Cloudera and Hortonworks, the Hadoop 1.0 release, Hadoop 2.0 GA, Spark becoming a top-level project, Hadoop 3.0 in 2017) and outline the Hadoop ecosystem: HDFS for file storage, YARN for resource and job scheduling, MapReduce and Spark as compute engines, HBase for NoSQL, SQL data warehousing, and machine/deep learning.
    0 credits | 33 pages | 841.56 KB | 1 year ago
  • PDF: Apache Karaf 3.0.5 Guides

    Excerpt: "… Pax Exam to write integration tests when developing applications using Karaf. Starting with Karaf 3.0 we've also included a component bridging between Karaf and Pax Exam, making it easier to write integration … Exception { assertTrue(true); } } COMMANDS Basically the Pax Exam - Karaf bridge introduced with 3.0 should support all commands you know from Pax Exam 2.x. In addition we've added various additional …"
    0 credits | 203 pages | 534.36 KB | 1 year ago
  • PDF: pandas: powerful Python data analysis toolkit - 0.24.0

    Excerpt: release-note fragments (__git_version__ will return the git commit sha of the current build (GH21295); compatibility with Matplotlib 3.0 (GH22790); added Interval.overlaps(), arrays.IntervalArray.overlaps(), and IntervalIndex.overlaps()), plus a getting-started snippet that creates a Series with a default integer index, s = pd.Series([1, 3, 5, np.nan, 6, 8]), and begins "Creating a DataFrame by passing a NumPy array, with a …"; the remainder is a spilled DataFrame preview with a 2013-01-0x date index and the start of a where example (a short runnable sketch follows this entry).
    0 credits | 2973 pages | 9.90 MB | 1 year ago
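    The Series/DataFrame snippet quoted above is from the "10 minutes to pandas" walkthrough. A minimal runnable sketch of the same two steps follows; the DataFrame's date index and column names are illustrative assumptions, since the search snippet is truncated at that point.

      import numpy as np
      import pandas as pd

      # A Series built from a list gets a default RangeIndex; np.nan marks missing
      # data, which is why the dtype is promoted to float64.
      s = pd.Series([1, 3, 5, np.nan, 6, 8])
      print(s)

      # A DataFrame built by passing a NumPy array, here with an assumed datetime
      # index and labeled columns.
      dates = pd.date_range("2013-01-01", periods=6)
      df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list("ABCD"))
      print(df.head())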
  • PDF: pandas: powerful Python data analysis toolkit - 0.25.1

    Excerpt: the same getting-started material as the 0.24.0 entry above: creating a Series with a default integer index, s = pd.Series([1, 3, 5, np.nan, 6, 8]), and "Creating a DataFrame by passing a NumPy array, with a …", followed by spilled DataFrame previews with a 2013-01-0x date index and their negated counterparts from a where example.
    0 credits | 2833 pages | 9.65 MB | 1 year ago
  • PDF: pandas: powerful Python data analysis toolkit - 0.25.0

    Excerpt: the same getting-started fragments as the 0.25.1 entry above (Series creation with a default integer index, DataFrame creation from a NumPy array, and a where example), differing only in the random sample values shown.
    0 credits | 2827 pages | 9.62 MB | 1 year ago
  • PDF: pandas: powerful Python data analysis toolkit - 1.4.2

    Excerpt: a note on the environments in which read_orc() can work (Linux: Conda successful, PyPI failed unless pyarrow==3.0; macOS: Conda successful, PyPI failed; Windows: both failed), followed by "Access data in the cloud" and minimum-dependency material; a grouped-aggregation result with value columns v1/v2 grouped by keys by1/by2, with a pointer to the groupby documentation for details (a hedged sketch follows this entry); and a reshaping example that builds a long-format DataFrame from np.ndenumerate over a NumPy array.
    0 credits | 3739 pages | 15.24 MB | 1 year ago
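    The v1/v2/by1/by2 fragment above appears to be the grouped mean from pandas' comparison-with-R guide. A hedged sketch of that kind of aggregation follows; the sample values are made up for illustration and are not those in the document.

      import numpy as np
      import pandas as pd

      df = pd.DataFrame(
          {
              "v1":  [1, 3, 5, 7, 8, 3],
              "v2":  [11, 33, 55, 77, 88, 33],
              "by1": ["red", "blue", "big", "red", np.nan, "big"],
              "by2": ["wet", "dry", "damp", "wet", np.nan, "dry"],
          }
      )

      # groupby() drops rows whose group keys are NaN by default; mean() is then
      # computed per (by1, by2) combination, mirroring R's aggregate().
      print(df.groupby(["by1", "by2"])[["v1", "v2"]].mean())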
  • PDF: pandas: powerful Python data analysis toolkit - 1.4.4

    Excerpt: the same fragments as the 1.4.2 entry above: the read_orc() environment compatibility note, the v1/v2 grouped aggregation, and the np.ndenumerate reshaping example, pd.DataFrame([tuple(list(x) + [val]) for x, val in np.ndenumerate(a)]), which emits one row per array element holding its index coordinates plus its value (a sketch of this reshaping follows this entry).
    0 credits | 3743 pages | 15.26 MB | 1 year ago
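    A minimal sketch of the np.ndenumerate reshaping quoted in this entry; the array shape and values are illustrative assumptions rather than those used in the document.

      import numpy as np
      import pandas as pd

      # A small 3-D array; np.ndenumerate yields ((i, j, k), value) pairs.
      a = np.arange(1.0, 25.0).reshape(2, 3, 4)

      # One row per element: the index coordinates followed by the element's value,
      # similar to what R's melt() produces for an array.
      long_form = pd.DataFrame([tuple(list(x) + [val]) for x, val in np.ndenumerate(a)])
      print(long_form.head())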
  • PDF: pandas: powerful Python data analysis toolkit - 1.3.3

    Excerpt: the same read_orc() compatibility, grouped-aggregation, and np.ndenumerate reshaping fragments as the 1.4.x entries above.
    0 credits | 3603 pages | 14.65 MB | 1 year ago
  • PDF: pandas: powerful Python data analysis toolkit - 1.3.4

    Excerpt: the same read_orc() compatibility, grouped-aggregation, and np.ndenumerate reshaping fragments as the 1.4.x entries above.
    0 credits | 3605 pages | 14.68 MB | 1 year ago
  • PDF: pandas: powerful Python data analysis toolkit - 1.3.2

    Excerpt: the same read_orc() compatibility, grouped-aggregation, and np.ndenumerate reshaping fragments as the 1.4.x entries above.
    0 credits | 3509 pages | 14.01 MB | 1 year ago