《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
"…the word itself. Let's discuss each step in detail. Step 1: Vocabulary Creation. In this step, we create a vocabulary of the top words (ordered by frequency) from the given training corpus. We would… ( ) is chosen, the dataset is preprocessed (lowercasing, stripping punctuation, normalization, etc.) to create pairs of input context (neighboring words) and label (the masked word to be predicted). The word… Step 1: Vocabulary Creation. In this step, we will use a TextVectorization layer from TensorFlow to create a vocabulary of the most relevant words. It finds the top N words in a dataset, sorts them in the…"
0 credits | 53 pages | 3.92 MB | 1 year ago
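The Chapter 4 snippet above describes building a vocabulary of the most frequent words (the book itself uses TensorFlow's TextVectorization layer for this). As a dependency-free illustration of the same idea, here is a minimal pure-Python sketch; the `[UNK]` token and the helper name `build_vocabulary` are my own choices for this example, not taken from the book.

```python
from collections import Counter

def build_vocabulary(corpus, top_n):
    """Sketch of Step 1 (Vocabulary Creation): keep the top_n most
    frequent words after lowercasing and stripping punctuation."""
    counts = Counter()
    for sentence in corpus:
        for token in sentence.lower().split():
            token = token.strip(".,!?;:\"'()")  # crude punctuation stripping
            if token:
                counts[token] += 1
    # Most frequent first; index 0 is reserved for out-of-vocabulary words.
    vocab = ["[UNK]"] + [w for w, _ in counts.most_common(top_n)]
    return {word: idx for idx, word in enumerate(vocab)}

corpus = ["The cat sat on the mat.", "The dog sat on the log."]
vocab = build_vocabulary(corpus, top_n=3)
print(vocab)
```

TextVectorization performs the same frequency-ranked truncation internally (plus the lowercasing and punctuation stripping), which is why choosing the vocabulary size N is the first decision in the pipeline.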
keras tutorial
"…Create a Multi-Layer Perceptron ANN… creating neural networks. Keras is based on a minimal structure that provides a clean and easy way to create deep learning models based on TensorFlow or Theano. Keras is designed to quickly define deep learning… installation is quite easy. Follow the steps below to properly install Keras on your system. Step 1: Create a virtual environment. Virtualenv is used to manage Python packages for different projects. This will…"
0 credits | 98 pages | 1.57 MB | 1 year ago
pandas: powerful Python data analysis toolkit - 0.14.0
"…usually sacrifices performance. So if you focus on one feature for your application you may be able to create a faster specialized tool. pandas is a dependency of statsmodels, making it an important part… sql functions. To connect with SQLAlchemy you use the create_engine() function to create an engine object from a database URI. You only need to create the engine once per database you are connecting to. For… In [43]: from sqlalchemy import create_engine  # Create your connection. In [44]: engine = create_engine('sqlite:///:memory:')  This engine can then be used to write…"
0 credits | 1349 pages | 7.67 MB | 1 year ago
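The pandas snippet above connects through SQLAlchemy's `create_engine('sqlite:///:memory:')`. As a dependency-free sketch of the same in-memory SQLite idea, the standard library's `sqlite3` module can be used directly; the table name `frame` and the sample rows here are hypothetical, chosen only for illustration.

```python
import sqlite3

# SQLAlchemy's create_engine("sqlite:///:memory:") wraps the same
# in-memory database that the stdlib sqlite3 module exposes directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE frame (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO frame VALUES (?, ?)", [(1, "x"), (2, "y")])
rows = conn.execute("SELECT a, b FROM frame ORDER BY a").fetchall()
print(rows)  # [(1, 'x'), (2, 'y')]
conn.close()
```

The "create the engine once per database" advice in the snippet carries over: a connection (or engine) is a reusable handle, and reopening it for every query throws away that reuse.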
pandas: powerful Python data analysis toolkit - 0.15
"…usually sacrifices performance. So if you focus on one feature for your application you may be able to create a faster specialized tool. pandas is a dependency of statsmodels, making it an important part… arithmetic and comparisons (GH8813, GH5963, GH5436). sql_schema now generates dialect-appropriate CREATE TABLE statements (GH8697). The slice string method now takes step into account (GH8754). Bug in BlockManager… To connect with SQLAlchemy you use the create_engine() function to create an engine object from a database URI. You only need to create the engine once per database you are connecting to. For…"
0 credits | 1579 pages | 9.15 MB | 1 year ago
pandas: powerful Python data analysis toolkit - 0.15.1
"…usually sacrifices performance. So if you focus on one feature for your application you may be able to create a faster specialized tool. pandas is a dependency of statsmodels, making it an important part… the create_engine() function to create an engine object from a database URI. You only need to create the engine once per database you are connecting to. For an in-memory sqlite database: In [43]: from sqlalchemy import create_engine  # Create your connection. In [44]: engine = create_engine('sqlite:///:memory:')  This engine can then be used to write…"
0 credits | 1557 pages | 9.10 MB | 1 year ago
pandas: powerful Python data analysis toolkit - 0.17.0
"…usually sacrifices performance. So if you focus on one feature for your application you may be able to create a faster specialized tool. pandas is a dependency of statsmodels, making it an important part… argument must be specified as True. Google BigQuery enhancements: added the ability to automatically create a table/dataset using the pandas.io.gbq.to_gbq() function if the destination table/dataset does not… df.B.cat.categories Out[4]: Index([u'c', u'a', u'b'], dtype='object'). Setting the index will create a CategoricalIndex: In [5]: df2 = df.set_index('B') In [6]: df2.index Out[6]: CategoricalIndex([u'a'…"
0 credits | 1787 pages | 10.76 MB | 1 year ago
pandas: powerful Python data analysis toolkit - 0.13.1
"…usually sacrifices performance. So if you focus on one feature for your application you may be able to create a faster specialized tool. pandas is a dependency of statsmodels, making it an important part… is now in the API documentation; see the docs. json_normalize() is a new method that allows you to create a flat table from semi-structured JSON data. See the docs (GH1067). Added PySide support for the… by select_column(key, column).unique(). The min_itemsize parameter to append will now automatically create data_columns for passed keys. Enhancements: improved performance of df.to_csv() by up to…"
0 credits | 1219 pages | 4.81 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
"…transformations applied separately result in a dataset 3x the original size. Can we apply N transformations to create a dataset Nx the size? What are the constraining factors? An image transformation recomputes the… from tensorflow.keras import layers, optimizers, metrics; DROPOUT_RATE = 0.2; LEARNING_RATE = 0.0002; NUM_CLASSES = 102; def create_model(): # Initialize the core model. core_args = dict(input_shape=(IMG_SIZE, IMG_SIZE, 3), include_top=False); core = apps.resnet50.ResNet50(**core_args); core.trainable = False # Create the full model with input, preprocessing, core and softmax layers. model = tf.keras.Sequential([layers.Input([IMG_SIZE…"
0 credits | 56 pages | 18.93 MB | 1 year ago
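The Chapter 3 snippet above asks whether applying N transformations creates a dataset Nx the original size. A minimal sketch of that multiplication, using stand-in string "images" and hypothetical transformation names in place of real pixel operations (real augmentations would operate on arrays and, as the snippet notes, recompute the image each time, which is the main constraining cost):

```python
def augment(dataset, transformations):
    """Apply each transformation to every example: N transformations
    on a dataset of size D yield N * D augmented examples."""
    return [t(x) for t in transformations for x in dataset]

# Stand-in "images" and transformations, purely for illustration.
dataset = ["img1", "img2", "img3"]
transformations = [
    lambda x: x + ":identity",
    lambda x: x + ":flip",
    lambda x: x + ":rotate",
]
augmented = augment(dataset, transformations)
print(len(augmented))  # 3 transformations x 3 examples = 9
```

In practice frameworks apply such transformations lazily inside the input pipeline rather than materializing the Nx dataset, precisely because of the compute and storage constraints the snippet raises.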
pandas: powerful Python data analysis toolkit - 0.12
"…usually sacrifices performance. So if you focus on one feature for your application you may be able to create a faster specialized tool. pandas is a dependency of statsmodels, making it an important part… by select_column(key, column).unique(). The min_itemsize parameter to append will now automatically create data_columns for passed keys. Enhancements: improved performance of df.to_csv() by up to… time-series plots. Added option display.max_seq_items to control the number of elements printed per sequence when pretty-printing it (GH2979). Added option display.chop_threshold to control display of small numerical…"
0 credits | 657 pages | 3.58 MB | 1 year ago
pandas: powerful Python data analysis toolkit - 1.0.0
"…agnostic (it can play a similar role to a pip and virtualenv combination). Miniconda allows you to create a minimal self-contained Python installation, and then use the conda command to install additional… Running the Miniconda installer will do this for you. The installer can be found here. The next step is to create a new conda environment. A conda environment is like a virtualenv that allows you to specify a specific… set of libraries. Run the following commands from a terminal window: conda create -n name_of_my_env python  This will create a minimal environment with only Python installed in it. To put yourself inside…"
0 credits | 3015 pages | 10.78 MB | 1 year ago
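The pandas install snippet above creates an isolated environment with `conda create -n name_of_my_env python`. As a standard-library sketch of the same idea (no conda required), Python's `venv` module builds a comparable isolated environment; `with_pip=False` and the temporary target directory are choices made here to keep the example fast and offline, not part of the pandas instructions.

```python
import os
import tempfile
import venv

# conda create -n name_of_my_env python  -> stdlib analogue:
# build a minimal isolated environment named name_of_my_env.
target = os.path.join(tempfile.mkdtemp(), "name_of_my_env")
venv.EnvBuilder(with_pip=False).create(target)

# The new environment has its own config and interpreter layout.
print(os.path.isdir(target))                    # True
print("pyvenv.cfg" in os.listdir(target))       # True
```

Activating the environment (conda's `conda activate`, venv's `source bin/activate`) then puts its interpreter first on PATH, which is what "putting yourself inside" the environment means.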
397 results in total.