Web Scraping Using Python

Introduction

 
Data is the most valuable asset for any organization. It helps them learn about their operational activities, the needs of the market, and their competitors' data on the internet, all of which helps them plan for the future. In this article, we are going to learn one of the most in-demand skills on the Internet, one that helps many institutions take their business to the next level: collecting data from a webpage or website, known as "Web Scraping", using one of the most popular programming languages, Python.
 
Definition
  • The process of extracting HTML data from a webpage/website.
  • Transforming unstructured HTML data into structured data, such as an Excel sheet or a dataset.
  • Let's study this concept with an example: extracting the names of the weblinks available on the home page of the www.c-sharpcorner.com website.
Step 1
 
To start with web scraping, we need two libraries: BeautifulSoup from the bs4 package and request from urllib. Import both of these Python packages. (If bs4 is not already available, it can be installed with pip under the name beautifulsoup4; urllib is part of the Python standard library.)
  # import packages (libraries)
  from bs4 import BeautifulSoup
  import urllib.request
Step 2
 
Select the URL to extract its HTML elements.
  # target URL
  url = "https://www.c-sharpcorner.com"
Step 3
 
We can access the content of this webpage and save its HTML in “myUrl” by using the urlopen() function from urllib.request.
  # use request to open the URL
  myUrl = urllib.request.urlopen(url)
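Some websites reject requests that do not look like they come from a browser. As a minimal optional sketch (not part of the original steps), a browser-like User-Agent header can be sent and network errors handled:
  # optional sketch: send a browser-like User-Agent and handle network errors
  import urllib.request
  from urllib.error import URLError, HTTPError

  request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
  try:
      myUrl = urllib.request.urlopen(request)
  except (HTTPError, URLError) as err:
      print("Could not open the URL:", err)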
Step 4
 
Create a BeautifulSoup object to further extract the webpage element data, using its various built-in functions.
  # soup is a BeautifulSoup object that lets us use all its built-in functions to extract webpage element data
  soup = BeautifulSoup(myUrl, 'html.parser')

  # title tag of the page
  print(soup.title)

  # name of the tag ('title')
  print(soup.title.name)

  # text inside the title tag
  print(soup.title.string)

  # navigating the tree: name of the title tag's parent
  print(soup.title.parent.name)

  # first p tag on the page
  print(soup.p)

  # prettify() shows how the tags are nested in the document
  print(soup.prettify())
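The same object also exposes tag attributes. As a small illustrative sketch (an addition, not part of the original steps), the href attribute of the first anchor tag can be read like this:
  # sketch: read an attribute of the first a tag, if the page has one
  first_link = soup.a
  if first_link is not None:
      print(first_link.get('href'))  # value of its href attribute, or None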
Step 5
 
Locate and scrape the target elements. Using the soup.find_all() function, extract a specific HTML tag from the entire webpage or from a specific portion of it.

We need to find the target HTML elements on this web page, extract them, and store them. Elements on a web page typically carry an HTML "id" or "class" attribute. To check their id or class, we need to inspect the element on the webpage using the browser's developer tools.
  # soup.find_all('div') extracts all the div tags on the given URL
  div_list = soup.find_all('div')

  # each div tag is a single element of the div_list list
  print(div_list)
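Printing the whole list can be overwhelming on a large page. As a quick optional sketch (not in the original steps), you can look at the count and just the first match instead:
  # sketch: summarize the results instead of dumping everything
  print(len(div_list))               # how many div tags were found
  if div_list:
      print(div_list[0].prettify())  # nested structure of the first div only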
Step 6
 
On inspecting the web page to extract all the weblink names on the www.c-sharpcorner.com website, we located the ul tag with the class value 'headerMenu' as the parent node.
 
To extract all the child nodes, which hold the weblink names we are after, we located the li tag as the target node.
  # weblinks[] is a list to store all the weblink names on https://www.c-sharpcorner.com
  weblinks = []

  # the outer loop extracts every ul tag with the class value 'headerMenu'
  for i in soup.find_all('ul', {'class': 'headerMenu'}):
      # the inner loop extracts all the li tags inside each ul
      for j in i.find_all('li'):
          # extract the a tag in each li
          per_link = j.find('a')
          if per_link is not None:
              # print the weblink name
              print(per_link.get_text())
              # append the weblink name to the weblinks list
              weblinks.append(per_link.get_text())
Output of the above code:
 
TECHNOLOGIES
ANSWERS
LEARN
NEWS
BLOGS
VIDEOS
INTERVIEW
PREP
BOOKS
EVENTS
CAREER
MEMBERS
JOBS
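The Definition section mentioned turning the scraped data into a structured form such as an Excel sheet or dataset. As a minimal sketch of that idea (an addition, not part of the original steps; the file name weblinks.csv is hypothetical), the collected weblinks list can be written to a CSV file, which Excel can open directly:
  # sketch: save the scraped weblink names to a CSV file (openable in Excel)
  import csv

  with open('weblinks.csv', 'w', newline='', encoding='utf-8') as f:
      writer = csv.writer(f)
      writer.writerow(['weblink'])  # header row
      for name in weblinks:
          writer.writerow([name])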
 

Summary

 
This article covered the basics of extracting HTML element data from a given URL using Python and BeautifulSoup.
 
Download the source code.

