PaddleDTX 1.0.0 中文文档
./xdb-cli --host http://127.0.0.1:8121 files upload -n paddlempc -m train_dataA4.csv -i ./train_dataA.csv --ext '{"FileType":"csv","Features":"id,CRIM,ZN,INDUS,CHAS,NOX,RM", "TotalRows":457}' -e '2021-12-10 ...
...-8e0965c708a8 --keyPath './keys' --conf ./testdata/executor/node1/conf/config.toml -o ./output.csv
The parameters are as follows:
• --id: the ID of the prediction task
• --keyPath: defaults to './keys'; the computation requester's private key is read from this directory and identifies the requester. The key can also be passed directly with the -k flag.
Task executor node A contributes the model-training sample file train_dataA.csv [https://github.com/PaddlePaddle/PaddleDTX/blob/master/dai/mpc/testdata/vl/linear_boston_housing/train_dataA.csv]; task executor node B contributes the model-training sample file train_dataB.csv [https://github.com/...]
53 pages | 1.36 MB | 1 year ago
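The --ext metadata shown above (feature list and row count) can be derived from the CSV itself before upload. A minimal sketch, assuming the file's first row is a header; the build_ext helper and the subprocess invocation are illustrative conveniences, not part of the xdb-cli tooling, and the -e expiry flag is omitted because its value is truncated in the snippet:

import csv
import json
import subprocess

def build_ext(path):
    # Derive the --ext JSON (header features, data row count) from a CSV file.
    with open(path, newline='') as f:
        reader = csv.reader(f)
        header = next(reader)                # first row holds the feature names
        total_rows = sum(1 for _ in reader)  # remaining rows are data
    return json.dumps({
        'FileType': 'csv',
        'Features': ','.join(header),
        'TotalRows': total_rows,
    })

ext = build_ext('./train_dataA.csv')
# Illustrative invocation; the flags mirror the command shown above.
subprocess.run([
    './xdb-cli', '--host', 'http://127.0.0.1:8121',
    'files', 'upload', '-n', 'paddlempc',
    '-m', 'train_dataA4.csv', '-i', './train_dataA.csv',
    '--ext', ext,
], check=True)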
peewee Documentation, Release 2.10.2
Contents (excerpt): Key/Value Store, Shortcuts, Signal support, pwiz (a model generator), Schema Migrations, Reflection, Database URL, CSV Utils, Connection pool, Read Slaves, Test Utils, pskel, Flask Utils, API Reference, Models, Fields, Query
Related posts: Personalized news digest (with a boolean query parser!) [http://charlesleifer.com/blog/...ews-digest-with-boolean-query-parser/]; Using peewee to explore CSV files [http://charlesleifer.com/blog/using-peewee-to-explore-csv-files/]; Structuring Flask apps with Peewee [http://charlesleifer...]
...basis, you can simply tell peewee to turn off auto_increment during the import:
data = load_user_csv()  # load up a bunch of data
User._meta.auto_increment = False  # turn off auto incrementing IDs
with ...
275 pages | 276.96 KB | 1 year ago
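A plausible completion of this import pattern, assuming peewee 2.x and SQLite; the User model, its columns, and the CSV layout are illustrative, not taken from the docs:

import csv
from peewee import SqliteDatabase, Model, CharField

db = SqliteDatabase(':memory:')

class User(Model):
    # The implicit auto-incrementing 'id' primary key is used here.
    username = CharField()

    class Meta:
        database = db

db.create_tables([User])

def load_user_csv(path='users.csv'):
    # Read (id, username) rows from a CSV file with a header line.
    with open(path, newline='') as f:
        return [{'id': int(r['id']), 'username': r['username']}
                for r in csv.DictReader(f)]

data = load_user_csv()              # load up a bunch of data
User._meta.auto_increment = False   # turn off auto incrementing IDs
with db.atomic():
    for row in data:
        # force_insert makes peewee INSERT even though the PK is already set.
        User(id=row['id'], username=row['username']).save(force_insert=True)
User._meta.auto_increment = True    # restore the default afterwards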
PaddleDTX 1.0.0 中文文档
1. Training task: task executor node A contributes the training sample file train_dataA.csv, and task executor node B contributes train_dataB.csv.
2. Prediction task: task executor node A contributes the prediction sample file predict_dataA.csv, and task executor node B contributes predict_dataB.csv.
7.2 Test script: this walkthrough uses paddledtx_test.sh.
57 pages | 624.94 KB | 1 year ago
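These A/B files are the two halves of a vertically partitioned dataset: each executor node holds different feature columns for the same samples, keyed by the shared id column (the first feature in the --ext list above). A minimal sanity-check sketch, assuming each CSV has a header row with an 'id' column; the check itself is not part of PaddleDTX:

import csv

def read_ids(path):
    # Collect the sample ids from a CSV file with a header row.
    with open(path, newline='') as f:
        return {row['id'] for row in csv.DictReader(f)}

ids_a = read_ids('./train_dataA.csv')
ids_b = read_ids('./train_dataB.csv')

# Vertical training needs both nodes to cover the same set of samples.
mismatched = ids_a ^ ids_b
if mismatched:
    raise SystemExit('node A and node B disagree on %d sample ids' % len(mismatched))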
peewee Documentation, Release 2.10.2
Related posts: ...and Peewee; Personalized news digest (with a boolean query parser!); Using peewee to explore CSV files; Structuring Flask apps with Peewee; Creating a lastpass clone with Flask and Peewee.
...when iterating over large result sets:
# Let's assume we've got 10 million stat objects to dump to a csv file.
stats = Stat.select()
# Our imaginary serializer class
serializer = CSVSerializer()
# Loop ...
221 pages | 844.06 KB | 1 year ago
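A sketch of how this loop plausibly continues, assuming peewee 2.x: calling iterator() on the query keeps peewee from caching each model instance, so the full result set never sits in memory at once. The Stat model and the CSV columns are illustrative:

import csv
from peewee import SqliteDatabase, Model, CharField, IntegerField

db = SqliteDatabase('stats.db')

class Stat(Model):
    url = CharField()
    hits = IntegerField()

    class Meta:
        database = db

stats = Stat.select()
with open('stats.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(('url', 'hits'))
    # iterator() streams rows without caching model instances.
    for stat in stats.iterator():
        writer.writerow((stat.url, stat.hits))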
Apache Unomi 1.x - Documentation
The profile import and export feature in Apache Unomi is configuration-driven and consumes or produces CSV files containing the profiles to be imported or exported.
4.1. IMPORTING PROFILES
Only ftp, sftp, ...
file:///tmp/?fileName=profiles.csv&move=.done&consumer.delay=25s
Where:
• fileName can be a pattern; for example, use include=.*.csv instead of fileName=... to consume all CSV files. By default the processed ...
SFTP source paths:
sftp://USER@HOST/PATH?password=PASSWORD&include=.*.csv
ftps://USER@HOST?password=PASSWORD&fileName=profiles.csv&passiveMode=true
Where:
• USER ...
158 pages | 3.65 MB | 1 year ago
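A minimal sketch of feeding the file:// route above: it writes /tmp/profiles.csv, which the import configuration polls every 25 seconds and moves into the .done folder once processed. The column layout here is purely illustrative; the real columns must match the mapping defined in your import configuration:

import csv

# Hypothetical profile rows; the actual layout depends on your import mapping.
profiles = [
    ('jdoe', 'John', 'Doe', 'jdoe@example.com'),
    ('asmith', 'Anna', 'Smith', 'asmith@example.com'),
]

with open('/tmp/profiles.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for row in profiles:
        writer.writerow(row)
# Picked up by: file:///tmp/?fileName=profiles.csv&move=.done&consumer.delay=25s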
Falcon v3.0.0-b2 Documentation
Outputting CSV Files
Generating a CSV (or PDF, etc.) report and making it available as a downloadable file is a fairly common back-end service task. The easiest approach is to simply write CSV rows to an io.StringIO buffer, then assign its value to resp.text:

class Report:
    def on_get(self, req, resp):
        output = io.StringIO()
        writer = csv.writer(output, quoting=csv.QUOTE_NONNUMERIC)
        writer.writerow(('fruit', 'quantity'))
        writer.writerow(('apples', 13))
        writer.writerow(('oranges', 37))

        resp.content_type = 'text/csv'
        resp.downloadable_as = 'report.csv'
        resp.text = output.getvalue()

The ASGI variant is the same apart from the async responder:

class Report:
    async def on_get(self, req, resp):
        output = io.StringIO()
        writer = csv.writer(output, quoting=csv.QUOTE_NONNUMERIC)
        ...

340 pages | 1.15 MB | 1 year ago
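For very large reports, buffering the whole file in a StringIO may not be ideal. A hedged alternative sketch, assuming a WSGI app: resp.stream accepts an iterable of byte strings, so rows can be sent as they are produced. The row source and route are illustrative:

import csv
import io

import falcon

def row_chunks():
    # Yield CSV-encoded byte chunks one row at a time.
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_NONNUMERIC)
    for row in (('fruit', 'quantity'), ('apples', 13), ('oranges', 37)):
        writer.writerow(row)
        yield buf.getvalue().encode()
        buf.seek(0)
        buf.truncate(0)

class StreamingReport:
    def on_get(self, req, resp):
        resp.content_type = 'text/csv'
        resp.downloadable_as = 'report.csv'
        # Rows are streamed to the client instead of buffered whole.
        resp.stream = row_chunks()

app = falcon.App()
app.add_route('/report', StreamingReport())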













