Web Scraping Made Easy with Python and BeautifulSoup

In today’s data-driven world, the internet is an invaluable treasure trove of information. From news articles to product prices and financial data, the web is brimming with valuable data that can be harnessed for various purposes. However, accessing and collecting this data can be a daunting task if done manually. This is where web scraping comes into play.

What is web scraping?

Web scraping is the process of extracting data from websites. It involves programmatically accessing web pages, retrieving their content, and then parsing and extracting useful information from that content. Web scraping allows you to automate the collection of data from the internet, making it easier to gather large amounts of information quickly and efficiently.

Here are some key points to understand about web scraping:

Accessing Web Pages:

Web scraping starts with sending HTTP requests to web pages. This is done using programming languages like Python, along with libraries like requests, to retrieve the HTML content of a webpage.
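As a sketch of this first step, here is how a GET request might be made with the requests library (the URL and the fetch_html helper below are illustrative stand-ins, not part of the article's later example):

```python
import requests

def fetch_html(url):
    """Retrieve a page's raw HTML, raising an error on a bad status."""
    response = requests.get(url, timeout=10)  # timeout keeps the scraper from hanging
    response.raise_for_status()               # turn 4xx/5xx statuses into exceptions
    return response.text

# You can inspect what would be sent without touching the network:
prepared = requests.Request("GET", "https://example.com/page.html").prepare()
print(prepared.method, prepared.url)
```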

Parsing HTML:

Once you have the HTML content, you use tools like HTML parsers (e.g., BeautifulSoup in Python) to parse and navigate the HTML structure of the page. HTML parsing helps you identify the data you want to extract.
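A minimal sketch of this parsing step, using a made-up fragment of HTML instead of a live page:

```python
from bs4 import BeautifulSoup

# A made-up fragment of HTML, standing in for a downloaded page.
html = """
<html><body>
  <h1>Example Countries</h1>
  <ul>
    <li class="country">Norway</li>
    <li class="country">Chile</li>
  </ul>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")  # html.parser ships with Python
print(soup.h1.text)  # the heading text: Example Countries
names = [li.text for li in soup.find_all("li", class_="country")]
print(names)         # ['Norway', 'Chile']
```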

Data Extraction:

After parsing the HTML, you can extract specific pieces of data, such as text, images, links, or structured information like tables. This is typically done by selecting HTML elements using CSS selectors or XPath expressions.
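For example, a CSS selector can pull every cell out of a table. The table below is made up (the country names and figures are fictional, purely for illustration):

```python
from bs4 import BeautifulSoup

# A made-up two-column table; the names and numbers are fictional.
html = """
<table id="gdp">
  <tr><th>Country</th><th>GDP per capita</th></tr>
  <tr><td>Freedonia</td><td>12,345</td></tr>
  <tr><td>Ruritania</td><td>6,789</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# CSS selector: every <td> inside the table whose id is "gdp".
cells = [td.text for td in soup.select("table#gdp td")]
rows = list(zip(cells[::2], cells[1::2]))  # pair each country with its value
print(rows)
```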


Automation:

Web scraping can be automated, allowing you to scrape multiple pages or websites in a systematic and consistent manner. This is especially valuable when dealing with large amounts of data.
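The multi-page idea can be sketched as a simple loop. The URLs and helper functions here are hypothetical stand-ins so the sketch runs without network access:

```python
import time

# Hypothetical page URLs -- stand-ins, not a real site layout.
page_urls = [
    "https://example.com/products?page=1",
    "https://example.com/products?page=2",
]

def scrape_all(urls, fetch, parse):
    """Apply the same fetch-and-parse steps to every URL, politely paced."""
    results = []
    for url in urls:
        results.append(parse(fetch(url)))
        time.sleep(1)  # pause between requests so we don't hammer the server
    return results

# Stand-in functions in place of a real fetcher and parser:
lengths = scrape_all(page_urls, fetch=lambda u: u, parse=lambda html: len(html))
print(lengths)
```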


Applications:

Web scraping has numerous applications across various fields. Businesses use it for competitive intelligence, price monitoring, and market research. Researchers use it to collect data for analysis. It’s also used for news aggregation, job scraping, and more.

Ethical Considerations:

While web scraping can be a powerful tool, there are ethical considerations to keep in mind. Scraping can put a load on a website’s server and potentially violate the site’s terms of service. It’s important to scrape responsibly and respect website policies.
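One concrete courtesy is checking a site’s robots.txt rules before scraping. Python’s standard library can parse those rules; the rules below are made up for illustration:

```python
from urllib import robotparser

# Parse a made-up robots.txt that disallows one directory.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
```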

Legal Considerations:

The legality of web scraping varies by jurisdiction and the nature of the data being scraped. Some websites explicitly prohibit scraping in their terms of service. It’s crucial to understand and comply with applicable laws and site policies.

Overall, web scraping is a valuable technique for collecting data from the internet, but it should be approached with care and responsibility to ensure ethical and legal compliance.




Why Beautiful Soup for web scraping?

Simple and Easy:

Imagine you want to collect information from a messy pile of papers. Beautiful Soup is like a magical tool that helps you sort through those papers effortlessly. It’s easy to learn and use, even if you’re not a computer expert.

Works with Messy Data:

Sometimes, web pages have messy, jumbled-up information. Beautiful Soup is like a detective that can make sense of the mess and find the valuable clues hidden within.

Helps You Find What You Want:

Imagine you’re searching for a specific word in a book. Beautiful Soup can help you find that word quickly, even in a big, complicated book (web page). It’s like a super-fast search tool.

No Extra Costs:

Beautiful Soup is like a free tool that anyone can use. You don’t have to pay anything to use it, and it’s always there to help.

Friendly Community:

Think of Beautiful Soup as a friendly club where lots of people help each other. If you ever have questions or get stuck, you can find lots of people online who are ready to assist you.

Works Everywhere:

It’s like a tool that works on any computer, whether you have a Mac, a Windows PC, or something else. It doesn’t care where you use it; it’s always ready to help.

So, Beautiful Soup is like your trusty sidekick when you want to grab information from websites. It’s easy to use, can handle messy data, and is free for everyone. Whether you’re a beginner or an expert, it’s a fantastic tool for web scraping.

Here’s a complete example that scrapes GDP per capita figures from a Wikipedia page:

import requests
from bs4 import BeautifulSoup

# Define the URL of the Wikipedia page
url = "https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita"

# Send an HTTP GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Parse the HTML content of the page
    soup = BeautifulSoup(response.text, "html.parser")

    # Find the table containing the GDP per capita data
    # (you may need to inspect the page to get the right table)
    table = soup.find("table", {"class": "wikitable"})

    # Initialize lists to store the data
    countries = []
    gdps_per_capita = []

    # Iterate through the rows of the table, skipping the header row
    for row in table.find_all("tr")[1:]:
        columns = row.find_all("td")
        if len(columns) >= 3:
            country = columns[1].text.strip()
            gdp_per_capita = columns[2].text.strip()
            countries.append(country)
            gdps_per_capita.append(gdp_per_capita)

    # Print the data (you can process or save it as needed)
    for country, gdp in zip(countries, gdps_per_capita):
        print(f"Country: {country}, GDP per Capita: {gdp}")
else:
    print(f"Failed to retrieve the webpage. Status code: {response.status_code}")

Code Walkthrough

import requests
from bs4 import BeautifulSoup

Importing Libraries:

Imagine you’re making a sandwich, and you need two special tools: a knife and a magical ingredient finder. In these lines of code, we’re telling the computer to bring in those tools.

import requests:

This is like saying, “Hey computer, go get that magical ingredient finder tool called ‘requests’ for us.” It helps us ask websites for information.

from bs4 import BeautifulSoup:

This is like saying, “And also, bring us that knife called ‘BeautifulSoup’ from the ‘bs4’ toolkit.” This knife helps us cut and slice the information we get from websites.

So, these lines of code are like getting our essential tools ready to make a sandwich (or in this case, to scrape information from a website). Once we have these tools, we can start cooking up some code to fetch data from the web and slice it up the way we want.


url = "https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita"

This line is like telling the computer, “Hey, I want to visit a specific webpage on the internet.” You’re giving the computer the web address (URL) of the page you want to look at and storing it in a variable called “url”.

response = requests.get(url)

This line of code is like telling your computer to go to the webpage whose address you stored in the “url” variable (in this case, a Wikipedia page).

Imagine it’s like sending a messenger to a library to get a specific book for you. The messenger (in this case, “requests.get”) goes to the library, finds the book (the webpage at the given URL), and brings it back.

So, “response” is like the messenger returning with the book (the webpage’s content). Now, your computer has the information from that webpage, and you can work with it, like reading or extracting data from it.




if response.status_code == 200:

This line of code checks whether the messenger (which we sent to get a webpage) was successful in its mission.

In simple terms, if the messenger comes back with a smiley face (status code 200), it means everything went well. The webpage was found, and we can use it. It’s like saying, “Great, the book we wanted is here, and we can read it!”

But if the messenger comes back with a frowny face (a different status code), it means there might be a problem. Maybe the webpage isn’t there, or something else went wrong. In that case, we need to figure out what went awry and try again.
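The standard library can translate those smiley and frowny faces into their official phrases. This is a small side illustration, separate from the article’s script:

```python
from http import HTTPStatus

# A few status codes the "messenger" might come back with.
for code in (200, 404, 503):
    print(code, HTTPStatus(code).phrase)
```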

soup = BeautifulSoup(response.text, "html.parser")

This line of code is like turning the messy book (webpage) that the messenger brought back into a clean and organized story that we can easily read.

Imagine the webpage as a jumbled-up pile of papers with text and pictures all over the place. “soup” is like a magic tool called BeautifulSoup, and we’re using it to magically arrange those pages into a neat, readable book.

So, after this line of code, “soup” is our tidy, organized book (or in computer terms, it’s a structured representation of the webpage’s content). Now, we can easily find and extract the information we want from it.
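Once the page is a tidy “book”, you can flip through it. A tiny made-up page shows the kinds of navigation BeautifulSoup offers:

```python
from bs4 import BeautifulSoup

# A one-line made-up page, to show how the "tidy book" can be navigated.
soup = BeautifulSoup('<p>The <a href="/wiki/GDP">GDP</a> article</p>', "html.parser")

link = soup.find("a")
print(link.text)         # the text inside the tag: GDP
print(link["href"])      # an attribute, read like a dictionary: /wiki/GDP
print(link.parent.name)  # step up the tree to the enclosing tag: p
```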

table = soup.find("table", {"class": "wikitable"})

This line of code is like telling our organized book (the “soup” we made earlier) to find a specific table inside it.

Imagine our organized book is like a big encyclopedia with different sections. We’re asking it to find a particular chapter, and we’re describing that chapter as a “table” with a specific class name, “wikitable.”

So, when this line runs, if our book contains a chapter that matches the description (a table whose class is “wikitable”), it will be pulled out for us to work with. This allows us to focus on just the information we need from that specific part of the book.
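Note that find() returns only the first matching table, while find_all() returns every match. A made-up page with three tables makes the difference visible:

```python
from bs4 import BeautifulSoup

# A made-up page with three tables, two of them "wikitable".
html = """
<table class="wikitable"><tr><td>first</td></tr></table>
<table class="infobox"><tr><td>other</td></tr></table>
<table class="wikitable"><tr><td>second</td></tr></table>
"""
soup = BeautifulSoup(html, "html.parser")

first = soup.find("table", {"class": "wikitable"})      # only the FIRST match
every = soup.find_all("table", {"class": "wikitable"})  # a list of ALL matches
print(first.td.text, len(every))
```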

countries = []
gdps_per_capita = []

These lines of code are like setting up two empty containers to collect specific pieces of information.

countries = []: Think of this as an empty box labeled “countries.” We’re going to use this box to collect the names of different countries.

gdps_per_capita = []: This is another empty box, but it’s labeled “gdps_per_capita.” We’ll use this box to collect the GDP per capita values, which tell us how wealthy each country is on average.

for row in table.find_all("tr")[1:]:

This line is like saying, “Let’s start reading through the rows of the table, but skip the first row because it often contains headers or titles.”

columns = row.find_all("td"):

It’s as if we’re looking at each row and finding all the cells in it. In a table, each “td” tag is a separate cell, and this line collects them all for us.

if len(columns) >= 3:

Here, we’re checking that the row has at least three columns, since we’re about to read from the second and third ones. This helps us make sure we’re looking at a row with enough data. If there are fewer than three columns, we skip it.

country = columns[1].text.strip():

If we have enough columns, we’re looking at the second column (index 1) and getting the text inside it. This text is the name of a country. We clean it up by trimming any extra whitespace from the ends (using .strip()).

gdp_per_capita = columns[2].text.strip():

Similarly, we’re now looking at the third column (index 2) and getting the text inside it. This text represents the GDP per capita of the country. We clean it up the same way.


countries.append(country):

We’re putting the country name we found into our “countries” box (list).

gdps_per_capita.append(gdp_per_capita):

We’re putting the GDP per capita value we found into our “gdps_per_capita” box (list).

So, these lines of code are like reading through a table, checking each row, and collecting the names of countries and their GDP per capita values into our containers for later use.
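The whole collecting loop can be seen in miniature on a made-up table of the same three-column shape (the names and figures below are fictional):

```python
from bs4 import BeautifulSoup

# A made-up table in the same three-column shape the walkthrough assumes.
html = """
<table>
  <tr><th>Rank</th><th>Country</th><th>GDP per capita</th></tr>
  <tr><td>1</td><td>Freedonia</td><td>12,345</td></tr>
  <tr><td>2</td><td>Ruritania</td><td>6,789</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

countries = []
gdps_per_capita = []
for row in soup.find("table").find_all("tr")[1:]:  # skip the header row
    columns = row.find_all("td")
    if len(columns) >= 3:
        countries.append(columns[1].text.strip())
        gdps_per_capita.append(columns[2].text.strip())

print(countries)
print(gdps_per_capita)
```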



for country, gdp in zip(countries, gdps_per_capita):

This line is like opening our containers (lists) of countries and GDP per capita values. It’s saying, “Let’s go through both boxes at the same time.”

print(f"Country: {country}, GDP per Capita: {gdp}"):

For each pair of country and GDP in our boxes, we’re printing out a message. It’s like saying, “Here’s the name of a country, and here’s its GDP per capita.” This line prints that information in a clear and neat way.
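The pairing that zip() does can be seen in isolation with sample values (fictional, not real scraped data):

```python
# zip() pairs items from two lists position by position.
countries = ["Freedonia", "Ruritania"]     # sample values, not scraped data
gdps_per_capita = ["12,345", "6,789"]

pairs = list(zip(countries, gdps_per_capita))
for country, gdp in pairs:
    print(f"Country: {country}, GDP per Capita: {gdp}")
```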

else:

If something went wrong and we didn’t get data from the webpage (for example, if the webpage couldn’t be accessed), this part kicks in.

print(f"Failed to retrieve the webpage. Status code: {response.status_code}"):

In this case, we’re printing a message that says, “Oops, we couldn’t get the data from the webpage, and here’s the reason why (the status code).”

So, this code is all about presenting the collected data nicely and providing a message in case something didn’t work as expected, like when the webpage couldn’t be reached.
