Python Libraries for Machine Learning: TensorFlow

Introduction

 
In the previous chapter, we studied Python Seaborn, its functions, and its Python implementation.
 
In this chapter, we will move on to the next very useful and important Python machine learning library: TensorFlow.
 

What is Python TensorFlow? 

 
TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. It is used for both research and production at Google.
 
Its flexible architecture allows for the easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices.
 
TensorFlow computations are expressed as stateful dataflow graphs. The name TensorFlow derives from the operations that such neural networks perform on multidimensional data arrays, which are referred to as tensors. During the Google I/O Conference in June 2016, Jeff Dean stated that 1,500 repositories on GitHub mentioned TensorFlow, of which only 5 were from Google.
 
In January 2019, Google announced TensorFlow 2.0, which became officially available in September 2019. In March 2018, Google had announced TensorFlow.js for machine learning in JavaScript, and in May 2019 it announced TensorFlow Graphics for deep learning in computer graphics.
 
TensorFlow was developed by the Google Brain team for internal Google use. It was released under the Apache License 2.0 on November 9, 2015. The official website is www.tensorflow.org.
 

Key Terms 

 

Tensor

 
TensorFlow's name is directly derived from its core construct: the tensor. In TensorFlow, all computations involve tensors. A tensor is a vector or matrix of n dimensions that can represent all types of data. All values in a tensor hold an identical data type with a known (or partially known) shape. The shape of the data is the dimensionality of the matrix or array.
 
A tensor can originate from the input data or from the result of a computation. In TensorFlow, all operations are conducted inside a graph. The graph is a set of computations that take place successively. Each operation is called an op node, and op nodes are connected to one another.
 
The graph outlines the ops and the connections between the nodes, but it does not display the values. The edges of the nodes are the tensors, i.e., a way to populate the operations with data.
 

Graphs

 
TensorFlow makes use of a graph framework. The graph gathers and describes all the series of computations done during training. The graph has several advantages:
  • It can run on multiple CPUs or GPUs, as well as on mobile operating systems.
  • The portability of the graph allows you to preserve the computations for immediate or later use; the graph can be saved and executed in the future.
  • All the computations in the graph are done by connecting tensors together.
  • A graph consists of nodes and edges. A node carries the mathematical operation and produces endpoint outputs. The edges explain the input/output relationships between nodes.
Try reading about graph theory to understand TensorFlow better and to gain a deeper understanding of the concept. A small example of inspecting a graph follows.
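As a minimal sketch (using the TensorFlow 1.x API that this chapter follows throughout), the snippet below builds a tiny graph and lists the op nodes registered in it; the node names a, b, and c are our own illustrative choices.
  import tensorflow as tf  # TensorFlow 1.x
  
  a = tf.constant(2, name = "a")   # op node producing a tensor
  b = tf.constant(3, name = "b")   # op node producing a tensor
  c = tf.add(a, b, name = "c")     # op node connecting the two tensors
  
  # Every op created above was registered in the default graph
  for op in tf.get_default_graph().get_operations():
      print(op.name)               # prints: a, b, c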
 

DistBelief

 
Starting in 2011, Google Brain built DistBelief as a proprietary machine-learning system based on deep learning neural networks. Its use grew rapidly across diverse Alphabet companies in both research and commercial applications. Google assigned multiple computer scientists, including Jeff Dean, to simplify and refactor the codebase of DistBelief into a faster, more robust application-grade library, which became TensorFlow. Earlier, in 2009, the team, led by Geoffrey Hinton, had implemented generalized backpropagation and other improvements that allowed the generation of neural networks with substantially higher accuracy, for instance, a 25% reduction in errors in speech recognition.
 

CPU

 
The Central Processing Unit (CPU) is the electronic circuitry that works as the brain of the computer, performing the basic arithmetic, logical, control, and input/output operations specified by the instructions of a computer program.
 

GPU

 
The Graphics Processing Unit (GPU) is a specialized electronic circuit designed to render 2D and 3D graphics together with a CPU. A GPU is also known as a graphics card in gamer culture. GPUs are now being harnessed more broadly to accelerate computational workloads in areas such as financial modeling, cutting-edge scientific research, deep learning, analytics, and oil and gas exploration.
 

TPU

 
In May 2016, Google announced its Tensor Processing Unit (TPU), an application-specific integrated circuit (a hardware chip) built specifically for machine learning and tailored for TensorFlow. TPU is a programmable AI accelerator designed to provide high throughput of low-precision arithmetic (e.g., 8-bit), and oriented toward using or running models rather than training them. Google announced they had been running TPUs inside their data centers for more than a year and had found them to deliver an order of magnitude better-optimized performance per watt for machine learning.
 
In May 2017, Google announced the second-generation TPU, as well as the availability of TPUs on Google Compute Engine. Second-generation TPUs deliver up to 180 teraflops of performance and, when organized into clusters of 64 TPUs, provide up to 11.5 petaflops.
 
In May 2018, Google announced the third-generation TPUs delivering up to 420 teraflops of performance and 128 GB HBM. Cloud TPU v3 Pods offer 100+ petaflops of performance and 32 TB HBM.
 

Edge TPU

 
In July 2018, the Edge TPU was announced. The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite machine learning (ML) models on small client computing devices such as smartphones, an approach known as edge computing.
 

TensorFlow Lite

 
In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite. In January 2019, the TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices. In May 2019, Google announced that its TensorFlow Lite Micro (also known as TensorFlow Lite for Microcontrollers) and ARM's uTensor would be merging.
 

Pixel Visual Core (PVC)

 
In October 2017, Google released the Google Pixel 2 which featured their Pixel Visual Core (PVC), a fully programmable image, vision, and AI processor for mobile devices. The PVC supports TensorFlow for machine learning (and Halide for image processing).
 

TensorFlow Session 

 
A session executes operations from the graph. To feed the graph with the values of a tensor, you need to open a session. Inside a session, you must run an operator to create an output. A quick sketch follows.
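As a minimal sketch in the TensorFlow 1.x style used throughout this chapter:
  import tensorflow as tf
  
  a = tf.constant(3)
  b = tf.constant(4)
  
  with tf.Session() as sess:    # open a session
      print(sess.run(a + b))    # run the op inside the session; prints 7
A fuller demonstration appears in the TensorFlow Session section later in this chapter.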
 

Tensorflow Architecture Components

 

1. TensorFlow Servables

 
These are the central rudimentary units in TensorFlow Serving. TensorFlow Servables are the objects that clients use to perform the computation.
 
The size of a servable is flexible. A single servable might include anything from a lookup table to a single model to a tuple of inference models. Servables can be of any type and interface, enabling flexibility and future improvements such as:
  • Streaming results
  • Experimental APIs
  • Asynchronous modes of operation

2. TensorFlow Servable Versions

 
TensorFlow Serving can handle one or more versions of a servable over the lifetime of a single server instance. This opens the door for fresh algorithm configurations, weights, and other data to be loaded over time. Versions also enable more than one version of a servable to be loaded concurrently, supporting gradual roll-out and experimentation. At serving time, clients may request either the latest version or a specific version id for a particular model.
 

3. TensorFlow Servable Streams 

 
A sequence of versions of a servable sorted by increasing version numbers. 
 

4. TensorFlow Models 

 
TensorFlow Serving represents a model as one or more servables. A machine-learned model may include one or more algorithms (including learned weights) and lookup or embedding tables. A servable can also serve a fraction of a model; for example, a large lookup table can be served as many instances.
 

5. TensorFlow Loaders 

 
Loaders manage a servable's life cycle. The Loader API enables common infrastructure that is independent of the specific learning algorithms, data, or product use-cases involved. Specifically, Loaders standardize the APIs for loading and unloading a servable.
 

6. Sources in Tensorflow Architecture 

 
Sources are, in simple terms, modules that find and provide servables. Each Source provides zero or more servable streams. For each servable stream, a Source supplies one Loader instance for each version it makes available to be loaded.
 

7. TensorFlow Managers 

 
Tensorflow Managers handle the full lifecycle of Servables, including:
  • Loading Servables
  • Serving Servables
  • Unloading Servables
Managers listen to Sources and track all versions. The Manager tries to fulfill Sources' requests but may refuse to load an aspired version. Managers may also postpone an "unload"; for example, a Manager may wait to unload until a newer version finishes loading, based on a policy that guarantees at least one version is loaded at all times. Managers also expose a simple, narrow interface, GetServableHandle(), for clients to access loaded servable instances.
 

8. TensorFlow Core 

 
Using the standard TensorFlow Serving APIs, TensorFlow Serving Core manages the following aspects of servables:
  • lifecycle
  • metrics
TensorFlow Serving Core treats servables and loaders as opaque objects.
 

9. TensorFlow Batcher

 
Batching of multiple requests into a single request can significantly reduce the cost of performing inference, especially in the presence of hardware accelerators such as GPUs. TensorFlow Serving includes a request batching widget that lets clients easily batch their type-specific inferences across requests into batch requests that algorithm systems can more efficiently process.  
 

Life Cycle of TensorFlow Servable 

 
1. Sources create Loaders for Servable Versions, then Loaders are sent as Aspired Versions to the Manager, which loads and serves them to client requests. 
2. The Loader contains whatever metadata it needs to load the Servable.
3. The Source uses a callback to notify the manager of the Aspired Version. 
4. The manager applies the configured Version Policy to determine the next action to take.  
5. If the manager determines that it’s safe, it gives the Loader the required resources and tells the Loader to load the new version. 
6. Clients ask the manager for the Servable, either specifying a version explicitly or just requesting the latest version. The manager returns a handle for the Servable. The Dynamic Manager applies the Version Policy and decides to load the new version. 
7. The Dynamic Manager tells the Loader that there is enough memory. The Loader instantiates the TensorFlow graph with the new weights. 
8. A client requests a handle to the latest version of the model, and the Dynamic Manager returns a handle to the new version of the Servable. 
 

Installing Python Tensorflow 

 
1. Ubuntu/Linux
  sudo apt update -y
  sudo apt upgrade -y
  sudo apt install python3-tk python3-pip -y
  
  pip install tensorflow      # Python 2.7; CPU support (no GPU support)
  pip3 install tensorflow     # Python 3.n; CPU support (no GPU support)
  pip install tensorflow-gpu  # Python 2.7; GPU support
  pip3 install tensorflow-gpu # Python 3.n; GPU support
If the above commands fail, you may be running an outdated pip binary; in that case, run the following commands (where tfBinaryURL stands for the URL of the TensorFlow Python package):
  sudo pip  install --upgrade tfBinaryURL   # Python 2.7
  sudo pip3 install --upgrade tfBinaryURL   # Python 3.n
2. Using Docker
 
1. Install Docker on your system if it is not already installed, following the instructions on the Docker website. 
2. For GPU support on Linux, install nvidia-docker. Recent versions of Docker include native support for GPUs, in which case nvidia-docker is not necessary. 
3. The official TensorFlow Docker images are located in the tensorflow/tensorflow Docker Hub repository
4. The following downloads TensorFlow release images to your machine: 
  docker pull tensorflow/tensorflow                     # latest stable release
  docker pull tensorflow/tensorflow:devel-gpu           # nightly dev release w/ GPU support
  docker pull tensorflow/tensorflow:latest-gpu-jupyter  # latest release w/ GPU support and Jupyter
To run a TensorFlow Docker image, execute the following command:
  docker run [-it] [--rm] [-p hostPort:containerPort] tensorflow/tensorflow[:tag] [command]
For details on executing docker images, see the docker run reference.
 
3. Anaconda Prompt
  conda create -n tensorflow_env tensorflow
  conda activate tensorflow_env
Use the above commands when installing for CPUs. 
  conda create -n tensorflow_gpuenv tensorflow-gpu
  conda activate tensorflow_gpuenv
Use the above commands when installing for GPUs. 
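Whichever method you choose, a quick way to verify the installation is to print the library version (a minimal check; the version string will vary with your install):
  python3 -c "import tensorflow as tf; print(tf.__version__)"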
 

Commonly Implemented Algorithms in TensorFlow 

 
The following is a list of algorithms and their corresponding TensorFlow estimators:
  • Linear regression: tf.estimator.LinearRegressor
  • Classification: tf.estimator.LinearClassifier
  • Deep learning classification: tf.estimator.DNNClassifier
  • Deep learning wide and deep: tf.estimator.DNNLinearCombinedClassifier
  • Boosted tree regression: tf.estimator.BoostedTreesRegressor
  • Boosted tree classification: tf.estimator.BoostedTreesClassifier
A minimal estimator example follows.
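As a minimal sketch of the estimator workflow (the feature name "x" and the toy data are our own illustrative choices, not part of the original text), a linear regressor can be trained like this in TensorFlow 1.x:
  import numpy as np
  import tensorflow as tf  # TensorFlow 1.x
  
  # Toy data roughly following y = 2x (illustrative only)
  x_train = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
  y_train = np.array([2.0, 4.0, 6.0, 8.0], dtype=np.float32)
  
  # Describe the input feature
  feature_columns = [tf.feature_column.numeric_column("x")]
  estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)
  
  # Input function that feeds the data in batches
  input_fn = tf.estimator.inputs.numpy_input_fn(
      x={"x": x_train}, y=y_train, batch_size=2, num_epochs=None, shuffle=True)
  
  estimator.train(input_fn=input_fn, steps=100)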

Creating a Tensor

 
Following is the procedure for creating a tensor.
 
Syntax
 
tf.constant(value, dtype, name = "")
 
arguments:
      - `value`: Values of n dimensions to define the tensor.
      - `dtype`: Type of data. Optional. For example:
            - `tf.string`: String variable
            - `tf.float32`: Float variable
            - `tf.int16`: Integer variable
      - `name`: Name of the tensor. Optional. By default, `Const_1:0`.
 

1. To create a tensor of dimension 0

  ## Rank 0
  # Default name
  import tensorflow as tf
  r1 = tf.constant(1, tf.int16)
  print(r1)
  r2 = tf.constant(1, tf.int16, name = "my_scalar")
  print(r2)
The output of the above code will be:
Tensor("Const_1:0", shape=(), dtype=int16)
Tensor("my_scalar:0", shape=(), dtype=int16)
 

2. To create a tensor with decimal or string values

  import tensorflow as tf
  # Decimal
  r1_decimal = tf.constant(1.12345, tf.float32)
  print(r1_decimal)
  # String
  r1_string = tf.constant("Guru99", tf.string)
  print(r1_string)
The output of the above code will be:
Tensor("Const_2:0", shape=(), dtype=float32)
Tensor("Const_3:0", shape=(), dtype=string)
 

3. To create a tensor of dimension 1 or higher

  import tensorflow as tf
  
  ## Rank 1
  r2_boolean = tf.constant([True, True, False], tf.bool)
  print(r2_boolean)
  ## Rank 2
  r2_matrix = tf.constant([ [1, 2],
                            [3, 4] ], tf.int16)
  print(r2_matrix)
The output of the above code will be:
Tensor("Const_4:0", shape=(3,), dtype=bool)
Tensor("Const_5:0", shape=(2, 2), dtype=int16)
 

Tensor Attributes 

 
Given below is a list of commonly used attributes 
 

1. tensorflow.shape

 
It is used for returning the shape of the tensor.
  import tensorflow as tf
  
  # Shape of tensor
  m_shape = tf.constant([ [10, 11],
                          [12, 13],
                          [14, 15] ]
                       )
  m_shape.shape
The output of the above code will be: TensorShape([Dimension(3), Dimension(2)])
 

2. tensorflow.zeros

 
It is used for creating a tensor of the given dimension with all elements being zero.
  import tensorflow as tf
  # Create a vector of 0
  print(tf.zeros(10))
The output of the above code will be: Tensor("zeros:0", shape=(10,), dtype=float32)
 

3. tensorflow.ones

 
It is used for creating a tensor of the given dimension with all elements being one.
  import tensorflow as tf
  # m_shape as defined in the tensorflow.shape example above
  m_shape = tf.constant([ [10, 11],
                          [12, 13],
                          [14, 15] ])
  # Create a vector of 1
  print(tf.ones([10, 10]))
  # Create a vector of ones with the same number of rows as m_shape
  print(tf.ones(m_shape.shape[0]))
  # Create a vector of ones with the same number of columns as m_shape
  print(tf.ones(m_shape.shape[1]))
  
  print(tf.ones(m_shape.shape))
The output of the above code will be:
Tensor("ones_1:0", shape=(10, 10), dtype=float32)
Tensor("ones_2:0", shape=(3,), dtype=float32)
Tensor("ones_3:0", shape=(2,), dtype=float32)
Tensor("ones_4:0", shape=(3, 2), dtype=float32)
 

4. tensorflow.dtype

 
It is used to find the data type of the elements of the tensor 
  import tensorflow as tf
  m_shape = tf.constant([ [10, 11],
                          [12, 13],
                          [14, 15] ]
                       )
  print(m_shape.dtype)
The output of the above code will be: <dtype: 'int32'>
  import tensorflow as tf
  
  # Change type of data
  type_float = tf.constant(3.123456789, tf.float32)
  type_int = tf.cast(type_float, dtype=tf.int32)
  print(type_float.dtype)
  print(type_int.dtype)
The output of the above code will be: <dtype: 'float32'> <dtype: 'int32'>

 
TensorFlow Useful Functions

 
Following are some mathematical functions that are useful for manipulating tensors:
  • tensorflow.add(a, b)
  • tensorflow.subtract(a, b)
  • tensorflow.multiply(a, b)
  • tensorflow.div(a, b)
  • tensorflow.pow(a, b)
  • tensorflow.exp(a)
  • tensorflow.sqrt(a)
  import tensorflow as tf
  
  x = tf.constant([2.0], dtype = tf.float32)
  tensor_a = tf.constant([[1, 2]], dtype = tf.int32)
  tensor_b = tf.constant([[3, 4]], dtype = tf.int32)
  
  # Square root
  print(tf.sqrt(x))
  # Exponential
  print(tf.exp(x))
  # Power
  print(tf.pow(x, x))
  # Add
  tensor_add = tf.add(tensor_a, tensor_b)
  print(tensor_add)
  # Subtract
  tensor_sub = tf.subtract(tensor_a, tensor_b)
  print(tensor_sub)
  # Multiply
  tensor_mul = tf.multiply(tensor_a, tensor_b)
  print(tensor_mul)
  # Divide
  tensor_div = tf.div(tensor_a, tensor_b)
  print(tensor_div)
The above code demonstrates the use of all the TensorFlow functions mentioned above.
 

TensorFlow Variables 

 
To create variables in TensorFlow, we use tensorflow.get_variable().
 
Syntax 
 
tf.get_variable(name = "", shape, dtype, initializer)
 
arguments:
      - `name = ""`: Name of the variable
      - `shape`: Shape (dimensions) of the tensor
      - `dtype`: Type of data. Optional
      - `initializer`: How to initialize the tensor. Optional
 
If an initializer is specified, there is no need to include the `shape`, as the shape of the initializer is used.
  import tensorflow as tf
  
  # Create a variable
  var = tf.get_variable("var", [1, 2])
  print(var)
  
  # The following initializes the variable with an initial/default value
  var_init_1 = tf.get_variable("var_init_1", [1, 2], dtype=tf.int32, initializer=tf.zeros_initializer)
  print(var_init_1)
  
  # Initializes the variable with the value of tensor_const
  tensor_const = tf.constant([ [10, 20],
                               [30, 40] ])
  var_init_2 = tf.get_variable("var_init_2", dtype=tf.int32, initializer=tensor_const)
  print(var_init_2)
The output of the above code will be:
<tf.Variable 'var:0' shape=(1, 2) dtype=float32_ref>
<tf.Variable 'var_init_1:0' shape=(1, 2) dtype=int32_ref>
<tf.Variable 'var_init_2:0' shape=(2, 2) dtype=int32_ref>
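Note that in TensorFlow 1.x, a variable must be initialized inside a session before its value can be read. A minimal sketch (the variable name demo_var is our own illustrative choice):
  import tensorflow as tf
  
  demo_var = tf.get_variable("demo_var", [1, 2], initializer=tf.zeros_initializer)
  
  with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())  # assign initial values
      print(sess.run(demo_var))                    # prints [[0. 0.]]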
 

TensorFlow Placeholder 

 
A placeholder has the purpose of feeding data into a tensor; placeholders are used to initialize the data that will flow through the tensors. To supply a placeholder with data, you use the feed_dict mechanism when running the graph. A placeholder can be fed only within a session.
 
Syntax
 
tf.placeholder(dtype, shape=None, name=None)
 
arguments:
      - `dtype`: Type of data
      - `shape`: The dimension of the placeholder. Optional. By default, the shape of the data
      - `name`: Name of the placeholder. Optional
  import tensorflow as tf
  data_placeholder_a = tf.placeholder(tf.float32, name = "data_placeholder_a")
  print(data_placeholder_a)
The output of the above code will be: Tensor("data_placeholder_a:0", dtype=float32)
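To actually feed the placeholder, a session is required. A minimal sketch (the doubling operation is our own illustrative choice):
  import tensorflow as tf
  
  data_placeholder_a = tf.placeholder(tf.float32, name = "data_placeholder_a")
  doubled = data_placeholder_a * 2
  
  with tf.Session() as sess:
      # feed_dict supplies the placeholder's value at run time
      print(sess.run(doubled, feed_dict={data_placeholder_a: [1.0, 2.0, 3.0]}))  # [2. 4. 6.]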
 

TensorFlow Session 

 
Following, we demonstrate the use of a TensorFlow session.
  import tensorflow as tf
  
  ## Create, run and evaluate a session
  x = tf.constant([2])
  y = tf.constant([4])
  
  ## Create operator
  multiply = tf.multiply(x, y)
  
  ## Create a session to run the code
  sess = tf.Session()
  result_1 = sess.run(multiply)
  print(result_1)
  sess.close()
The output of the above code will be: [8]
 

Simple Python Tensorflow Program

  import numpy as np
  import tensorflow as tf
In the above code, we import NumPy and TensorFlow and rename them np and tf, respectively.
  X_1 = tf.placeholder(tf.float32, name = "X_1")
  X_2 = tf.placeholder(tf.float32, name = "X_2")
In the above code, we define the two placeholders X_1 and X_2. When we create a placeholder node, we have to pass in the data type; since we will be multiplying floating-point numbers here, we use tf.float32. We also need to give the node a name; this name will show up when we look at graphical visualizations of our model.
  multiply = tf.multiply(X_1, X_2, name = "multiply")
In the above code, we define the node that performs the multiplication operation. In TensorFlow, we can do that by creating a tf.multiply node. This node will compute the product of X_1 and X_2.
  with tf.Session() as session:
      result = session.run(multiply, feed_dict={X_1:[1,2,3], X_2:[4,5,6]})
      print(result)
To execute operations in the graph, we have to create a session. In TensorFlow, this is done with tf.Session(). Once we have a session, we can ask it to run operations on our computational graph by calling session.run().
 
When the multiplication operation runs, it will see that it needs the values of the X_1 and X_2 nodes, so we also need to feed in values for X_1 and X_2. We can do that by supplying a parameter called feed_dict. We pass the values 1, 2, 3 for X_1 and 4, 5, 6 for X_2.
 
Simple_TensorFlow.py 
  import numpy as np
  import tensorflow as tf
  
  X_1 = tf.placeholder(tf.float32, name = "X_1")
  X_2 = tf.placeholder(tf.float32, name = "X_2")
  
  multiply = tf.multiply(X_1, X_2, name = "multiply")
  
  with tf.Session() as session:
      result = session.run(multiply, feed_dict={X_1:[1,2,3], X_2:[4,5,6]})
      print(result)
The above consolidated program will give the following result: [ 4. 10. 18.]
 

Methods to Load Data using Python TensorFlow

 
There are two ways to load data; they are as follows:
 

1. Load Data using NumPy Array

 
We can hard-code data into a NumPy array, or load data from an xls, xlsx, or CSV file into a Pandas DataFrame, which can then be converted into a NumPy array. You can use this method if your dataset is not too big, i.e., less than 10 gigabytes, so that the data fits into memory.
  ## Numpy to pandas
  import numpy as np
  import pandas as pd
  
  h = [[1,2],[3,4]]
  df_h = pd.DataFrame(h)
  print('Data Frame:', df_h)
  
  ## Pandas to numpy
  df_h_n = np.array(df_h)
  print('Numpy array:', df_h_n)
The output of the above code will be:
Data Frame:    0  1
0  1  2
1  3  4
Numpy array: [[1 2]
 [3 4]]
 

2. Load Data using TensorFlow Data Pipeline 

 
TensorFlow has a built-in API that helps you load the data, perform operations, and feed the machine learning algorithm easily. This method works very well, especially when you have a large dataset. For instance, image records are known to be enormous and do not fit into memory; the data pipeline manages the memory by itself. If, say, you have a dataset of 50 gigabytes and your computer has only 16 gigabytes of memory, then attempting to load the whole dataset into memory would crash the machine.
 
In this situation, you need to build a TensorFlow pipeline. The pipeline loads the data in batches, or small chunks. Each batch is pushed to the pipeline and made ready for training. Building a pipeline is an excellent solution because it allows you to use parallel computing: TensorFlow can train the model across multiple CPUs. This speeds up computation and makes it possible to train powerful neural networks. A sketch of a batched pipeline is shown below.
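As a minimal sketch of batching with the TensorFlow 1.x tf.data API (the toy array and batch size are our own illustrative choices), a dataset can be consumed in small chunks like this:
  import numpy as np
  import tensorflow as tf  # TensorFlow 1.x
  
  data = np.arange(6)  # toy data: [0 1 2 3 4 5]
  dataset = tf.data.Dataset.from_tensor_slices(data).batch(2)  # chunks of 2
  iterator = dataset.make_one_shot_iterator()
  next_batch = iterator.get_next()
  
  with tf.Session() as sess:
      for _ in range(3):
          print(sess.run(next_batch))  # [0 1], then [2 3], then [4 5]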
 

Methods to create TensorFlow Data Pipeline

 
1. Create the Data
  import numpy as np
  import tensorflow as tf
  x_input = np.random.sample((1,2))
  print(x_input)
In the above code, we generate two random numbers using NumPy's random number generator.
 
2. Create the Placeholder
  x = tf.placeholder(tf.float32, shape=[1,2], name = 'X')
We create a placeholder using tf.placeholder().
 
3. Define the Dataset Method
  dataset = tf.data.Dataset.from_tensor_slices(x)
We define the dataset using tf.data.Dataset.from_tensor_slices().
 
4. Create the Pipeline
  iterator = dataset.make_initializable_iterator()
  get_next = iterator.get_next()
In the above code, we initialize the pipeline where the data will flow. We create an iterator with make_initializable_iterator, which we name iterator. Then we call this iterator to feed the next batch of data, and we name this step get_next. Note that in our example, there is only one batch of data, with only two values.
 
5. Execute the Operation
  with tf.Session() as sess:
      # feed the placeholder with data
      sess.run(iterator.initializer, feed_dict={ x: x_input })
      print(sess.run(get_next))  # output, e.g., [ 0.52374458  0.71968478]
In the above code, we initiate a session and run the iterator operation. We feed the feed_dict with the values generated by NumPy; these two values populate the placeholder x. Then we run get_next to print the result.
 
TensorFlow_Pipeline.py
  import numpy as np
  import tensorflow as tf
  x_input = np.random.sample((1,2))
  print(x_input)
  # using a placeholder
  x = tf.placeholder(tf.float32, shape=[1,2], name = 'X')
  dataset = tf.data.Dataset.from_tensor_slices(x)
  iterator = dataset.make_initializable_iterator()
  get_next = iterator.get_next()
  with tf.Session() as sess:
      # feed the placeholder with data
      sess.run(iterator.initializer, feed_dict={ x: x_input })
      print(sess.run(get_next))
The output of the above code will be:
[[0.87908525 0.80727791]]
[0.87908524 0.8072779 ]
 

Conclusion 

 
In this chapter, we studied Python TensorFlow: its key terms, its architecture, and its basic tensor operations. In the next chapter, we will start discussing Statistics.