# Learn About Data Structures and Algorithms (DSA) - Part Two

The topics to be covered are,

• Introduction
• What is a List?
• Static List
• Implementation of the static list
• Analysis of the list
• Measuring the time complexity
• Result of the analysis
• Best implementation of the list after analysis

You should be aware of the concept of a data structure and its purpose, the ways of implementing it, and its classification. You should also know about time complexity and space complexity, and the ways to analyze an algorithm using the different asymptotic notations. All the mentioned topics are clearly described in my previous article. Following is the link to my previous article.

## Introduction

As you have learned, if we want to implement any data structure, we first have to define its Abstract Data Type (ADT) and then provide its implementation. In other words, we first build the mathematical and logical model, then implement it on concrete data objects. Because we see only the abstract view, we call this model an abstract data type (ADT).

The data, and the operations on that data, that a data structure comprises are called its Abstract Data Type (ADT). An ADT does not give any implementation of the data objects and their operations; in this sense, an ADT provides us with data abstraction. An ADT focuses on “what a data structure does” rather than on “how it does it”. In this article, you’ll learn the list as an ADT and then its implementation.

## What is a List?

A list is a common real-world entity: a collection of objects of the same data type. First, we define the list as an Abstract Data Type (ADT), so we are concerned only with the data and the operations on the list, not its implementation.

Let's take an example: we want to make a list that can perform the following operations,

• Store a given number of elements of any given data type.
• Read elements by their position in the list.
• Write/Modify element at any position in a list.

All the above features concern the data and the operations on a list of a specific data type. So this is an abstract data type, because we focus only on “what the list does” rather than on the implementation, i.e. “how the list does it”.

## Static List

The list you have seen above is a static list: it has a fixed size and will not grow automatically when it is full. Now the question is how to implement this static list, and as we know, implementation is the second part of defining a data structure. In other words, we need a concrete implementation of this static list in which we can store, read, and modify elements. Let’s implement the above static-list ADT using an array.

### What is an Array?

An array is a sequential (linear) data structure that provides an implementation of the list, so it is also a collection of objects of the same data type. “Sequential” here means that an array is one contiguous block of memory, indexed from 0 up to a given size minus one. Let’s implement the array for the above ADT static list.
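The article's own implementation is shown in pictures, so here is a minimal sketch of the idea in Python. The names `StaticList`, `read`, and `write` are illustrative, not from the article, and a fixed contiguous block is simulated by preallocating a Python list of `size` slots:

```python
class StaticList:
    """A fixed-size list backed by one contiguous block of slots.

    Python lists are dynamic, so we preallocate `size` slots to
    simulate a fixed array indexed from 0 to size - 1.
    """

    def __init__(self, size):
        self.size = size
        self.data = [None] * size  # the fixed block of memory

    def read(self, index):
        # Read the element at a given position.
        if not 0 <= index < self.size:
            raise IndexError("position out of range")
        return self.data[index]

    def write(self, index, value):
        # Write/modify the element at a given position.
        if not 0 <= index < self.size:
            raise IndexError("position out of range")
        self.data[index] = value
```

Note that this list satisfies exactly the three ADT operations above (store, read, write) and nothing more; in particular, it cannot grow.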

So far, you have seen the implementation of the ADT as a static list. Since this list is static, and we now want a list with more features, we have to redefine the list ADT. Let’s say we want to make a list that has the following functionality:

• An empty list should have size 0.
• Enable us to insert an element at any position.
• Enable us to remove an element from any position.
• Enable us to count the total number of elements in the list.
• Read/Modify the element at any particular position in the list.
• Specify the data type of the list.

Now, we need a data structure that implements the above ADT. Let’s again take the array (a sequential, linear data structure). Say we create an array of some large maximum size to implement the above ADT.

You can see in the following pictures that first we create an int array A of some maximum size, then we declare a variable named “end” and set it to -1, an invalid index indicating that the list is empty. Then we insert values into the list.
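The setup in the pictures can be sketched like this (an illustrative Python rendering; `MAX_SIZE` and the stored values are assumptions, while `A` and `end` are named as in the text):

```python
MAX_SIZE = 8              # some large maximum size
A = [None] * MAX_SIZE     # the fixed array, like "int A[MAX_SIZE]"
end = -1                  # -1 is an invalid index: the list is empty

def insert_end(value):
    """Store a value in the next free slot, advancing `end`."""
    global end
    if end + 1 >= MAX_SIZE:
        raise OverflowError("list is full")
    end += 1
    A[end] = value

insert_end(4)
insert_end(8)   # the list now holds [4, 8] and end == 1
```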

Now suppose we want to insert the value 5 at a specific position in the list, say position/index 2. We first have to shift all the values from index 2 onwards towards higher indices, so that the slot at index 2 becomes free, and then we store the value 5 at index 2. You can see it logically in the following picture,

Now suppose we want to remove an element from the list: first the value is deleted, then all the values after it are shifted down to fill the empty slot. You can see it logically in the following pictures.
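The removal step can be sketched the same way (illustrative Python; the sample values are assumptions):

```python
A = [2, 4, 5, 7, 9, None]  # fixed array, last element at index `end`
end = 4

def remove_at(pos):
    """Delete A[pos], then shift A[pos+1..end] one slot towards
    lower indices to fill the gap (O(n) in the worst case)."""
    global end
    for i in range(pos, end):
        A[i] = A[i + 1]
    A[end] = None   # clear the now-unused slot
    end -= 1

remove_at(1)   # A becomes [2, 5, 7, 9, None, None]
```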

The pictures are made entirely by myself and are fairly self-explanatory, so if you still have any problem understanding them, feel free to ask me a question in the comments section.

If you want to count the total number of elements in the list, you can use a “count” function that calculates the total from the “end” variable. See the method below,
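The method itself is not embedded in this text, but it would amount to something like this hypothetical sketch: since the occupied indices run from 0 to `end`, the count is simply `end + 1`:

```python
def count(end):
    """Total number of elements in the list, derived from `end`.
    Indices run 0..end, so an empty list (end == -1) has count 0."""
    return end + 1
```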

So you have seen how this array implements the previous ADT. As you know, we created this list with some large fixed size. Now the question arises: what happens when the list is full but there is still data to be stored?

The answer to this question is: when the backing array is full, create a new array of double the size of the previous array, then copy the elements of the previous array into the new array.
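That doubling strategy might be sketched as follows (illustrative Python; a real implementation would allocate a fresh memory block):

```python
def grow(A, end):
    """Allocate a new array of double the size and copy the
    elements of the old array across (an O(n) operation)."""
    new_A = [None] * (2 * len(A))
    for i in range(end + 1):   # copy only the occupied slots
        new_A[i] = A[i]
    return new_A

A = [3, 6, 9]      # a full array: end == 2
A = grow(A, 2)     # A becomes [3, 6, 9, None, None, None]
```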

This is the strategy to follow when the array fills up. The problem is that this strategy is costly in terms of both time and space. Let’s move on to the analysis of this list.

## Analysis of the list

As we know, a data structure is a way to store and organize data, but we also have to keep our programs and algorithms efficient. In the case of this list, that means analyzing the cost of the operations performed on it.

## Measuring the Time Complexity

Let’s analyze the time cost of the operations performed on the previously created array. You’ll see the analysis in terms of the following cases.

• Accessing the element
• Inserting the element
• Removing the element

### Accessing the element

If we want to access the element at a specific position/index, it takes constant time, because it is an array and the address can be computed directly from the index. So its time complexity in terms of Big-Oh notation is O(1).

### Inserting the element

If we want to insert an element at the end of the array, this insertion also takes constant time, so its time complexity in terms of Big-Oh notation is O(1). This is the case when the array is not full.

But if you want to insert the element at a particular position in the array, we have to shift elements, as you saw in the pictures above. The worst case is inserting an element at the start of the array, i.e. at index 0, because then we have to shift all the elements towards higher indices. The time cost is therefore proportional to the length n of the array, T(n) ∝ n, so the time complexity in terms of Big-Oh notation becomes O(n), which is linear time complexity.

The third case is inserting an element when the array is already full. As we know, we then have to make a new array of double the size of the previous array and copy the elements from the previous array into the new one. Creating the new array and copying all the elements is costly in terms of time: the cost is proportional to the length n of the array, T(n) ∝ n, so the time complexity in terms of Big-Oh notation is again O(n), linear time complexity.

### Removing the element

If we want to remove an element at the end of the array, the removal takes constant time, so its time complexity in terms of Big-Oh notation is O(1).

But if you want to remove an element at a particular position in the array, we again have to shift elements, as you saw in the pictures above. The worst case is removing the element at the start of the array, i.e. at index 0, because then we have to shift all the remaining elements towards lower indices. The time cost is proportional to the length n of the array, T(n) ∝ n, so the time complexity in terms of Big-Oh notation becomes O(n), which is linear time complexity.

## Result of the analysis

For understanding the result, the following table summarizes the time cost of each operation for the ADT described above.

| Operation | Time complexity |
| --- | --- |
| Access an element at any index | O(1) |
| Insert at the end (array not full) | O(1) |
| Insert at an arbitrary position | O(n) |
| Insert when the array is full (grow and copy) | O(n) |
| Remove from the end | O(1) |
| Remove from an arbitrary position | O(n) |

You can deduce the following results from the table above,

• We can access any element in constant time, which is a good thing.
• Insertion and deletion of elements are costly.
• Adding an element when the array is full is also costly in time.

### The problem of space complexity

This array implementation also has a memory problem: it wastes memory. When we try to insert an element and find the array full, we create a new array of double the size of the previous one. But until we fill the whole new array, all of the remaining slots are wasted. So the strategy is also costly in terms of space complexity.

## Best implementation

From our analysis of this array implementation, we can see that it is costly in terms of both memory and time. So now we need a data structure that implements our previous ADT efficiently, one that grows its size automatically when the list is full. In other words, we want a dynamic list. All these features can be provided by a data structure named the “Linked List”. A linked list implements our ADT and behaves efficiently in terms of memory and time. You’ll learn the introduction and implementation of the linked list in the coming articles. So stay tuned!

## Summary

In this article, you have learned how a data structure is defined as an ADT and then implemented. You learned the list ADT and its array implementation, then the analysis of the different operations run on the array in terms of time and space complexity, and finally the best implementation of the described ADT. So stay tuned for the coming articles.

## Conclusion