
# Learn About Data Structures And Algorithm (DSA) - Part Four


## Introduction

The topics to be covered are,

• Introduction and background
• Cost of accessing an element
• Memory requirements
• Cost of inserting an element
• Cost of deleting an element
• Feasibility in use

### Prerequisites

You should be aware of the concept of a data structure and its purpose, the ways of implementing it, and its classification. You should also have knowledge of time complexity and space complexity, and of the ways to analyze an algorithm using the different asymptotic notations. Finally, you should know the basic concepts of arrays and linked lists. All the mentioned topics are clearly described in my previous articles; following are the links to them.

## Background

In the previous articles, we have seen how arrays and linked lists implement our ADTs. In this article, we'll learn the differences between the two data structures. Remember, there is no absolute comparison between data structures; the application scenario decides which data structure to use where. We'll see the behavior of both data structures, arrays and linked lists, in terms of the following aspects.

• Cost of accessing an element
• Memory requirements
• Cost of inserting an element
• Cost of deleting an element
• Feasibility in use

## Cost of accessing an element

Here, we'll assess the cost of accessing an element in both arrays and linked lists. First of all, look at arrays.

## Arrays

Irrespective of the size of the array, accessing an element takes constant time in every case. We only need the base address of the array, and after a small calculation we can access any element. You have seen this procedure, with the calculations, in my previous articles. If you still have any problem understanding it, feel free to ask.

So, in arrays, we only need the base address and a little arithmetic. We can therefore say that the time complexity for accessing an array element is O(1).
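As a rough illustration, the base-address calculation can be sketched in Python (the addresses and element size here are made-up numbers, just for the example):

```python
def element_address(base_address, index, element_size):
    # Address of A[i] = base + i * size: one multiply and one add,
    # so the cost is O(1) regardless of the array's length.
    return base_address + index * element_size

# Example: 4-byte ints starting at a hypothetical address 200
assert element_address(200, 0, 4) == 200
assert element_address(200, 3, 4) == 212
```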

## Linked Lists

As we know, a linked list is not one contiguous block of memory; its data is stored in multiple blocks at multiple addresses, linked together.

Each block has two parts: a data part that stores the data, and a pointer part that stores the address of the next block. Each block is called a node. If we want to access any element in the linked list, we only need to know the address of the head node. There are three cases for accessing an element in a linked list.

### Worst Case

If we need to access the last element in the linked list, we have to traverse the whole list. So the time taken is directly proportional to the size of list n.

### Average Case

If we need to access the middle element of the list, then it will be the average case. But still, we have to traverse half the list. So, the time is directly proportional to the size of list n.

### Best Case

If we need to access the first element, the head node, that is the best case, because we don't need to traverse the list at all; it takes constant time.

So we can say that the time complexity for accessing an element of a linked list is O(n).
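A minimal Python sketch of a node and the traversal described above (the list values are arbitrary):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data      # data part
        self.next = next      # pointer part (reference to the next node)

def get_at(head, index):
    # Walk node by node from the head; reaching position i costs O(i),
    # so the worst case (the last node) is O(n).
    current = head
    for _ in range(index):
        current = current.next
    return current.data

head = Node(2, Node(4, Node(6, Node(8))))
assert get_at(head, 0) == 2   # best case: the head itself
assert get_at(head, 3) == 8   # worst case: traverse the whole list
```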

### Result

Hence, if your application is mostly about accessing elements in a collection, the linked list is not a good choice, because accessing an element costs O(n) time. The array, with its O(1) access, is the better choice here.

## Memory requirements

In this section, we need to assess the space complexity of the array and the linked list, that is, the memory consumed by each. The points below for arrays and linked lists correspond to each other.

## For Arrays

An array has a fixed size, and we must specify that size before creating it.

Consider an array of size 6 in which only the first three blocks of memory are filled with data; the rest of the memory is unused. This is the bad point about arrays: they can leave memory unused.

The int data type is a simple one; with 4-byte ints, the memory occupied by the array A of size 6 is 6 × 4 = 24 bytes.

• But if we use a complex data type instead of int, float, or double, an element can occupy any size. Let's say in our scenario it takes 16 bytes; the array then occupies 6 × 16 = 96 bytes.
• In arrays, at the time of allocation, the memory must be available as one large contiguous block.
• When the array is full, we have to create a new array bigger than the previous one and copy the previous elements into it. All of this is costly for performance.

## For Linked Lists

• A linked list is not of a fixed size. It acquires memory through a separate request for each node it creates.
• Unlike arrays, there is no unused-memory problem in a linked list, because memory is allocated only when a node is actually created.
• As we know, a node consists of two parts, a data part and a pointer part, and both take memory. This means a linked list pays extra memory for a pointer variable in every node. The memory in bytes for such a linked list, say one with four nodes, is as follows.

Each part of a node takes 4 bytes, so every node takes 8 bytes, and the four-node list takes 4 × 8 = 32 bytes.

So with a simple 4-byte data type, the list actually takes more memory (32 bytes) than the data in the array (24 bytes). However, the picture can change with different data types.

If we use a complex data type instead of int, float, or double, an element can take any size. Let's say in our scenario it takes 16 bytes. The array then occupies 6 × 16 = 96 bytes, while the list occupies 4 × (16 + 4) = 80 bytes.

Now you can see that the linked list takes less memory than the array. So whether to use an array or a list depends entirely on the application scenario.

• In a linked list, at the time of allocation, the memory only needs to be available as small separate blocks, one per node.
• There is no performance problem of creating a new list and copying the previous data, as there is with arrays, because a linked list never fills up; it simply grows by one node per request.
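The memory figures in this section can be reproduced with a small calculation, assuming 4-byte ints and 4-byte pointers as in the examples:

```python
def array_bytes(capacity, element_size):
    # An array reserves memory for every slot, used or not.
    return capacity * element_size

def list_bytes(node_count, element_size, pointer_size=4):
    # Each node stores its data plus one pointer.
    return node_count * (element_size + pointer_size)

# 4-byte ints: a 6-slot array vs a 4-node list
assert array_bytes(6, 4) == 24
assert list_bytes(4, 4) == 32    # the pointer overhead dominates

# 16-byte records: the pointer overhead is now relatively small
assert array_bytes(6, 16) == 96
assert list_bytes(4, 16) == 80   # the list costs less here
```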

## Cost of inserting an element

In this section, you'll learn the cost of inserting an element at different positions in both arrays and linked lists.

## For Arrays

There are four cases for inserting an element into an array, as given below.

• At the beginning
• At the end (if the array is not filled)
• At the end (if the array is filled)
• At the specific position

Here we’ll learn all the cases with their time complexity.


### At the beginning

In arrays, if we need to insert an element at the beginning, we have to shift all the elements one position to the right to make an empty space at the first position, and then store the new element there.

So the time taken is proportional to the number of elements in the array, and the time complexity in terms of Big-Oh notation is O(n).
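A minimal sketch of this shifting insert, assuming a fixed-capacity buffer tracked together with an element count:

```python
def insert_at_beginning(arr, count, value):
    # arr is a fixed-size buffer holding `count` elements; shift every
    # element one slot right (n moves), then place the value at index 0.
    for i in range(count, 0, -1):
        arr[i] = arr[i - 1]
    arr[0] = value
    return count + 1

buf = [3, 5, 7, None, None, None]   # capacity 6, 3 elements used
n = insert_at_beginning(buf, 3, 1)
assert buf[:n] == [1, 3, 5, 7]
```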

### At the end (if the array is not full)

In arrays, if we need to insert an element at the end of the array, we just need to know the index of the next free slot and store the value there. So it takes constant time to insert a value at the end of the array.

So time complexity in terms of Big-Oh notation is O(1) because it takes constant time.

### At the end (if the array is full)

We have to create a new, bigger array, copy all the previous elements into it, and then insert the element at the end. So the time taken is proportional to the size of the array, n.

So time complexity in terms of Big-Oh notation is O(n).
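The grow-and-copy step might be sketched as follows (doubling the capacity here is just one common choice, not something the article prescribes):

```python
def insert_when_full(arr, value):
    # Allocate a larger buffer (here, double the capacity), copy all n
    # elements across, then append; the copy makes this O(n).
    new_arr = [None] * (2 * len(arr))
    for i, x in enumerate(arr):
        new_arr[i] = x
    new_arr[len(arr)] = value
    return new_arr

full = [1, 2, 3]
grown = insert_when_full(full, 4)
assert grown == [1, 2, 3, 4, None, None]
```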

### At the specific ith position

If we need to insert an element at some specific ith position, say in the middle of the array, then we also have to shift about n/2 elements. So the time taken to insert an element at the ith position is proportional to the size of the array n, and hence the time complexity in terms of Big-Oh notation is O(n).

## For Linked Lists

There are three cases for inserting an element into a linked list, as given below:

• At the beginning of the list
• At the end of the list
• At the specific position

Here we’ll learn all the cases with their time complexity.

### At the beginning

In a linked list, if we want to insert an element at the beginning, we only need to create a new node at the start of the list and adjust the pointer links. So it does not depend on the size of the list.

So time taken by the linked list is constant. So time complexity in terms of Big-Oh notation is O(1).
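A minimal sketch of inserting at the head:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data      # data part
        self.next = next      # pointer part

def push_front(head, value):
    # Create one node and point it at the old head: O(1),
    # independent of the list's length.
    return Node(value, head)

head = Node(2, Node(3))
head = push_front(head, 1)
assert head.data == 1
assert head.next.data == 2
```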

### At the end

In a linked list, if we want to insert an element at the end, we just need to create a new node and adjust the links. But to adjust the link of the last node, we first have to traverse the whole list of n elements.

So time taken by the list is proportional to the size of n. So time complexity in terms of Big-Oh notation is O(n).
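The traverse-then-link step can be sketched as:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def append(head, value):
    # With only a head pointer we must walk to the last node first,
    # which costs O(n), before linking in the new node.
    new_node = Node(value)
    if head is None:
        return new_node
    current = head
    while current.next is not None:
        current = current.next
    current.next = new_node
    return head

head = append(append(append(None, 1), 2), 3)
assert head.data == 1
assert head.next.next.data == 3
```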

### At the specific ith position

In a linked list, to insert an element at some specific ith position, we don't have to shift elements as in arrays, but we still need to traverse the list until the ith position is reached. So it also depends on the size of the list, n.

So time taken by the list to insert an element at some specific position is proportional to the size of list n. So time complexity in terms of Big-Oh notation is O(n).

## Cost of deleting an element

In this section, you’ll learn the cost of deleting an element at different positions in both the arrays and linked list. Let’s analyze the cost for both the arrays and the linked list.

## For Arrays

The cost of deleting an element from an array is the same as the cost of inserting one into it, and the reasons behind the time complexities are also the same. Hence,

• At the beginning: the time complexity is O(n), because we have to shift all the remaining elements to fill the gap left at the beginning.
• At the end: the time complexity is O(1), because we can reach the exact location (the last index) directly through calculation.
• At the ith position: the time complexity is O(n), because we have to shift the elements that come after the ith position.
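The shifting delete for arrays can be sketched as follows, again assuming a fixed-capacity buffer tracked with an element count:

```python
def delete_at(arr, count, index):
    # Shift everything after `index` one slot left; deleting at the
    # beginning moves n-1 elements, so it is O(n).
    for i in range(index, count - 1):
        arr[i] = arr[i + 1]
    arr[count - 1] = None
    return count - 1

buf = [1, 2, 3, 4, None]
n = delete_at(buf, 4, 0)
assert buf[:n] == [2, 3, 4]
```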

## For Linked Lists

The cost of deleting an element from a linked list is the same as the cost of inserting one into it, and the reasons behind the time complexities are also the same. Hence,

• At the beginning: the time complexity is O(1), because we just need to delete the first node and move the head pointer, which takes constant time.
• At the end: the time complexity is O(n), because we have to traverse the whole list until the last element is reached.
• At the ith position: the time complexity is O(n), because we have to traverse the list until the ith position is reached.
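Both list deletions can be sketched as:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def delete_head(head):
    # O(1): just move the head pointer to the second node.
    return head.next

def delete_last(head):
    # O(n): walk to the second-to-last node, then drop its link.
    if head is None or head.next is None:
        return None
    current = head
    while current.next.next is not None:
        current = current.next
    current.next = None
    return head

head = Node(1, Node(2, Node(3)))
head = delete_head(head)
assert head.data == 2
head = delete_last(head)
assert head.data == 2 and head.next is None
```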

## Feasibility in use

### For Arrays

Arrays are easy to use and easy to implement.