Getting to know Intel DevCloud

Introduction

 
In this article, we will learn about the basic approach to developing an AI edge system, and we will also get to know Intel DevCloud.
 

Basic Approach to Developing an AI Edge System

 
Analyze
 
You (as the IoT engineer) collaborate with the end user at this stage to understand what they want to achieve and to gather all the requirements.
 
Design
 
The overall design of the system is developed during this phase, including its individual modules and the flow of data from one to the next. As an engineer, you should also have a strong hypothesis by this point about the most suitable hardware approach.
 
Develop 
 
During this phase, you create a sample application using the OpenVINO Toolkit. You choose the AI models that seem suitable for running inference, and you determine which types of hardware are possible candidates.
 
Test 
 
You then test the application on Intel DevCloud.
 
Deploy 
 
The last step is to deploy the system for the client. Despite all the configuration and testing you have done, adjustments based on actual use may still be necessary.
 
Lifecycle 
   
Let me give you an example to help you understand how this lifecycle is implemented. Consider building a queuing system whose aim is to raise an alert if the number of people in a queue exceeds a certain limit.
 
Problem Statement 
 
Suppose Walmart wants this system configured for 100 outlets, with all the information transmitted to a central server in real time. Walmart cannot pay more than $100 per outlet for system development. Every outlet has an i7 CPU. The average daily number of customers is 100 on weekdays and 150 on weekends. Each outlet has 10 cash counters, each of which can accommodate up to 10 people at a time, but because of the coronavirus they want to cap each queue at 4 people. They have also mounted PTZ cameras. Therefore, we must build a system that raises a warning when the queue length reaches 4.
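The alert rule at the heart of this problem can be sketched in a few lines of Python (a minimal illustration; the function and variable names are my own, not part of any Intel API):

```python
# Minimal sketch of the queue-alert rule: warn when the number of
# people detected in a queue reaches the limit of 4 set in the
# problem statement. Names here are illustrative only.

QUEUE_LIMIT = 4

def should_alert(people_in_queue: int, limit: int = QUEUE_LIMIT) -> bool:
    """Return True when the queue has reached or exceeded the limit."""
    return people_in_queue >= limit

# Example: per-frame counts as they might come from a people detector
frame_counts = [2, 3, 4, 5]
alerts = [should_alert(n) for n in frame_counts]
print(alerts)  # [False, False, True, True]
```

In the full system, `people_in_queue` would come from an AI model counting people in each camera frame, and a True result would trigger the alert sent to the central server.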
 
Solution 
 
To start, we analyze the whole application and understand the requirements; i.e., we finalize the following:
  1. Maximum number of people that can be in a queue
  2. Number of people that are in the queue on average
  3. The path a person can follow to reach the queue
  4. The path a person follows to get past the queue
  5. Possible locations where we can mount the cameras, or the locations where cameras are already installed
  6. Specifications of the current hardware, like the processor, cameras, etc.
  7. Whether the customer is willing to invest in new hardware
As per the SDLC, defining the project cost is also done in the analysis phase, but I am not covering it here, as it is mainly handled by management or salespeople; IoT engineers generally do not get involved in it.
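The outcome of the analysis phase can be captured as a simple record. The sketch below uses a plain dictionary whose field names are my own; the values are taken from the Walmart problem statement above:

```python
# Requirements gathered during the analysis phase, recorded as a plain
# dictionary. Field names are illustrative; values come from the
# problem statement.
requirements = {
    "outlets": 100,
    "max_cost_per_outlet_usd": 100,
    "cpu": "i7",
    "avg_customers_weekday": 100,
    "avg_customers_weekend": 150,
    "cash_counters_per_outlet": 10,
    "max_people_per_counter": 10,
    "queue_alert_limit": 4,   # coronavirus restriction
    "camera_type": "PTZ",
}
print(requirements["queue_alert_limit"])  # 4
```

Writing the requirements down in a structured form like this makes it easy for the design phase to reference them programmatically.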
 
Now comes the design phase, where we draft our plan before starting development. In this case, one of several possible plans is to start by planning and understanding the camera locations with respect to the path a person follows to reach the counter.
  1. As PTZ cameras come in two specifications, 720 fps and 1080 fps, we will design our system for 1080 fps. At 1080 fps, we get roughly 0.9 milliseconds (1/1080 of a second) to process each frame. 
  2. The clock speed of an i7 averages 3.5 GHz, which means it can process a frame in less than 0.1 milliseconds; hence the client need not spend on new hardware. 
  3. Each outlet will have its own server, which will relay all the data to the central server.
  4. We will relay data every 15 minutes to reduce power overhead, so we will need some local storage at each outlet; a hard disk can be used.
That covers the hardware, which is a major part of the design. The only remaining work is to design the DFD, flowchart, etc.
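The per-frame time budget used in the design notes can be checked with a couple of lines (taking the 1080 fps figure from the text as given):

```python
# Back-of-the-envelope check of the per-frame time budget: at 1080
# frames per second, each frame gets 1/1080 of a second of processing.
fps = 1080
budget_ms = 1000 / fps  # milliseconds available per frame
print(f"Per-frame budget at {fps} fps: {budget_ms:.2f} ms")
# Per-frame budget at 1080 fps: 0.93 ms
```

If the measured inference time per frame stays below this budget, the existing i7 hardware is sufficient and no new hardware purchase is needed.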
 
After that, we start the development phase, where the developers set up their systems and create a replica of the deployment location so that, once the application is up and running, alpha and penetration testing can be done.
 
Once alpha and penetration testing are done, we run system tests and check how much load the system can take. This whole process is the responsibility of a tester, so I will not go deep into it here; if you want, you can ask your system tester or read about it online.
 
Once all system tests and checks pass, the system is ready to be deployed, so you send a deployment team consisting of a representative from your team and the technician who will deploy the application.
 
As per the SDLC, the next stage is maintenance, but in an AI pipeline we generally replace it with a new analysis phase, since a client will typically request additional functionality, which kicks off a new AI pipeline.
 

Intel DevCloud 

 
It is a cloud environment that allows you to create, prototype, and monitor the performance of your application on different hardware devices. It enables you to systematically build and analyze AI and machine vision workloads on Intel hardware.
 
Intel DevCloud consists of,
  1. Development nodes with which you interact to develop code and submit compute jobs.
  2. Edge nodes that contain inference devices on which you can run and test edge workloads.
  3. Storage servers that provide a network-stored filesystem such that all of your data is accessible on the same path from any machine in the cloud.
  4. A queue server with which you interact to submit compute jobs to edge nodes.
  5. UI software that allows you to access the Intel DevCloud resources from a web browser.
So far we have studied some theory; let us now see how to implement and use Intel DevCloud. To show that, I will use some commands from one of my own projects, so at times things may feel unrelated; don't worry, they are there just as a demo, and I will try to explain each command.
 

Setting up an environment

 
It is very important for any application to set up its environment so that the application knows where to find everything it may need.
  %env PATH=/opt/conda/bin:/opt/spark-2.4.3-bin-hadoop2.7/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/intel_devcloud_support
  import os
  import sys
  sys.path.insert(0, os.path.abspath('/opt/intel_devcloud_support'))
  sys.path.insert(0, os.path.abspath('/opt/intel'))
In the code above, %env sets the PATH so the required tools and repositories can be found. We then import 'os' and 'sys' so that we can add the required directories to Python's module search path.
 

To create a Python script using Jupyter Notebook 

 
If we want to run our logic on the server, we have to provide it as a Python script. You can interact with the server directly; here I am using a Jupyter notebook to interact with Intel DevCloud.
  %%writefile [filename].py

  ---------------------------------
  |                               |
  |                               |
  |        %File content%         |
  |                               |
  |                               |
  ---------------------------------
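As a concrete illustration, the file content could be a minimal script like the following. This is a hypothetical sketch: the argument handling, the default path, and the printed message are my own, not part of any Intel API; a real script would load the model with OpenVINO at the marked point.

```python
# Hypothetical content for [filename].py: a minimal script that accepts
# the model path which qsub's -F flag later passes on the command line.
import argparse

def main(argv=None):
    parser = argparse.ArgumentParser(description="Load a model for edge inference")
    parser.add_argument("model_path", nargs="?",
                        default="/data/models/demo",  # illustrative default
                        help="Path to the model files (without extension)")
    args = parser.parse_args(argv)
    # A real script would load the model with OpenVINO here; we just
    # echo the path to show how the argument arrives.
    print(f"Model path received: {args.model_path}")
    return args.model_path

if __name__ == "__main__":
    main()
```

Running the `%%writefile` cell saves this content to the named file on the DevCloud filesystem instead of executing it.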

To create a bash file using Jupyter Notebook 

 
To execute the Python script on the cloud, we use a bash script that follows a designated format and carries all the parameters needed for execution.
  %%writefile [filename].sh

  #!/bin/bash

  ---------------------------------
  |                               |
  |                               |
  |        %File content%         |
  |                               |
  |                               |
  ---------------------------------
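For example, the bash file's content could look like the sketch below. This is hypothetical: the variable name, the default path, and the commented-out `load_model.py` invocation are my own placeholders, shown only to illustrate how the argument string from qsub's -F flag arrives in the script as $1.

```shell
#!/bin/bash
# Hypothetical content for [filename].sh. With qsub's -F flag, the
# argument string arrives here as $1; a default is included so the
# script also runs standalone.
MODEL_PATH=${1:-/data/models/demo}
echo "Running job with model: ${MODEL_PATH}"
# The Python script created earlier would be invoked at this point, e.g.:
# python3 load_model.py "${MODEL_PATH}"
```

As with the Python file, running the `%%writefile` cell saves this content to disk rather than executing it; the script runs later, on the edge node the job is scheduled to.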

Submitting a job request to Intel DevCloud

 
So far we have created our Python and bash scripts; now it is time to communicate with the cloud. To do so, we use '!qsub': the '!' prefix is Jupyter's shell escape, and qsub is the job-submission command of the underlying job scheduler, which submits jobs to Intel DevCloud.
 
The !qsub command takes a few command-line arguments,
  1. The first argument is the shell script filename - load_model_job.sh. This should always be the first argument.
  2. The -d flag designates the directory where we want to run our job. We'll be running it in the current directory, denoted by '.' (dot).
  3. The -l flag designates the node and quantity we want to request. The default quantity is 1, so the 1 after nodes is optional.
    For Example: -l nodes=1:tank-870:i5-6500te
  4. The -F flag lets us pass in a string with all command-line arguments we want to pass to our Python script.
For example,
  job_id_core = !qsub load_model_job.sh -d . -l nodes=1:tank-870:i5-6500te -F "/data/models/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106"
  print(job_id_core[0])

liveQStat 

 
We can see our job's live status via the liveQStat function. While it runs, it blocks the cell and polls the job status up to 10 times; the cell stays locked until the poll has completed 10 times, or the kernel can be interrupted by clicking the Jupyter Notebook stop button.
  • Q status means our job is currently awaiting an available node
  • R status means our job is currently running on the requested node
Example
  import liveQStat
  liveQStat.liveQStat()
The above will show you the live progress of your job request.
 

Conclusion

 
In this article, we learned about the basic approach we follow while building an AI application, and then about Intel DevCloud and how to communicate with it to get our edge application running.

