ASM 2: Algorithm and Data Structure (FPT Greenwich, BTEC)


ASSIGNMENT FRONT SHEET

Qualification: BTEC Level 5 HND Diploma in Computing
Unit number and title: Unit 19: Data Structures and Algorithms
Submission date: 12/03/2022
Date Received 1st submission:
Re-submission Date:
Date Received 2nd submission:
Student Name:        Student ID:        Class:
Assessor name: Quan-Hong Do

Student declaration: I certify that the assignment submission is entirely my own work and I fully understand the consequences of plagiarism. I understand that making a false declaration is a form of malpractice.
Student's signature:

Grading grid: P4 P5 P6 P7 M4 M5 D3 D4

Summative Feedback:        Grade:        Assessor Signature:        Date:
Resubmission Feedback:
Internal Verifier's Comments:        IV Signature:

Table of Contents

A INTRODUCTION
B IMPLEMENT COMPLEX DATA STRUCTURES AND ALGORITHMS
  1 Singly vs Doubly Linked-List
    1.1 Highlight some differences between Singly and Doubly Linked-List
    1.2 Describe Singly and Doubly Linked-List's operations
    1.3 Implement Singly and Doubly Linked-List
  2 Insert An Element In The Middle Of A Linked-List
    2.1 Implementation
    2.2 Explanation of the implementation
  3 Sorting in Linked-List
    3.1 Describe your selected sorting algorithm
    3.2 Implementation
    3.3 Explanation Of The Implementation
C IMPLEMENT ERROR HANDLING AND REPORT TEST RESULTS
  1 Testing Plan
  2 Evaluation
    2.1 Overall
    2.2 Explain and Solution
D DISCUSS HOW ASYMPTOTIC ANALYSIS CAN BE USED TO ASSESS THE EFFECTIVENESS OF AN ALGORITHM
  1 Asymptotic Analysis
  2 Asymptotic Notations And How They Relate To Ideas Of Best, Average And Worst Case
  3 Some Examples to Clarify O(1), O(n), O(N log N)
E DETERMINE TWO WAYS IN WHICH THE EFFICIENCY OF AN ALGORITHM CAN BE MEASURED, ILLUSTRATING YOUR ANSWER WITH AN EXAMPLE
F TRADE-OFF WHEN SPECIFYING AN ADT
G CONCLUSION
References

List of Figures

Figure 1: Node structure of singly linked list
Figure 2: Instance variables and constructor of linked list
Figure 3: addLast Operation and addFirst Operation
of singly
Figure 4: Singly Insert operation
Figure 5: Singly Search operation
Figure 6: getFirst and getLast operations
Figure 7: size and isEmpty operations of singly list
Figure 8: Singly removeFirst and removeLast operations
Figure 9: Node class of doubly linked list
Figure 10: Doubly linked list class
Figure 11: size and isEmpty operations of doubly
Figure 12: addFirst and addLast operations of doubly
Figure 13: search operation of doubly
Figure 14: Insert operation of doubly
Figure 15: getFirst and getLast operations of doubly
Figure 16: removeFirst and removeLast operations of doubly
Figure 17: insertMiddle of doubly linked list
Figure 18: Searching for the previous node
Figure 19: Adjusting the pointers of the new node
Figure 20: Complete inserting process
Figure 21: Sorting algorithm implementation

A INTRODUCTION

In this assignment, I complete several tasks: implementing a complex data structure (singly and doubly linked lists) that consists of the valid operations; implementing error handling and reporting test results; and discussing how asymptotic analysis can be used to assess the effectiveness of an algorithm, taking examples to clarify some Big-O concepts. I then determine two ways in which the efficiency of an algorithm can be measured, illustrating my answer with an example, and finally discuss the space-time trade-off when specifying an ADT.

B IMPLEMENT COMPLEX DATA STRUCTURES AND ALGORITHMS

1 Singly vs Doubly Linked-List

1.1 Highlight some differences between Singly and Doubly Linked-List

Both the singly and the doubly linked list can be understood as characteristic variations of the linked list; the choice between them depends on the purpose to achieve and on system limitations. The singly linked list is commonly viewed as the simple implementation of the linked-list ADT, while the doubly linked list is the more complex one. Moreover, most
of the strengths of one are the weaknesses of the other, and vice versa; the two sit at opposite ends of the same assessment. More specifically, here is a comparison of the two:

Concept
- Singly: the basic linked list, which stores each piece of data in a node; each node has only one pointer, pointing to the next node.
- Doubly: also stores data in nodes, but each node has two pointers: one pointing to the previous node and one pointing to the next node.

Node structure
- Singly: node data + next pointer (next node).
- Doubly: node data + next pointer (next node) + previous pointer (previous node).

Order of accessing
- Singly: each element can only be accessed or traversed one way, head-to-tail.
- Doubly: provides two-way access, both from head to tail and from tail to head.

Memory usage
- Singly: uses less memory, because each node stores only one pointer.
- Doubly: uses roughly twice the pointer memory, because each node stores both the next and the previous pointer.

Performance
- Singly: removeLast needs O(n). Searching a known position: best case O(1), average case O(n), worst case O(n). Insertion is easy, but the previous node cannot be accessed directly.
- Doubly: removeLast needs O(1). Searching a known position: best case O(1), average and worst case O(n), though traversal can start from whichever end is nearer. Insertion needs more pointer updates, but the previous node is easy to access.

When to use
- Singly: when saving memory matters and searching or insertion is not frequent; a common base implementation for stacks, binary trees, and hash tables.
- Doubly: when greater efficiency in searching or inserting is required and memory is not an issue; a common base implementation for stacks and queues.

1.2 Describe Singly and Doubly Linked-List's operations

a) Singly
Linked List operations:
- addFirst(): insert an element, saved in a new node, before the current head of the list. The next pointer of the new node points to the current head; the new node becomes the new head.
- addLast(): insert an element, saved in a new node, after the current tail of the list. The next pointer of the current tail points to the new node; the new node becomes the new tail.
- insert(): traverse to the node before the target index and set its next pointer to the new node; the next pointer of the new node is set to what the previous node pointed to.
- removeFirst(): set the head pointer to the next node of the current head, and set the old head's next pointer to null.
- removeLast(): traverse sequentially to the node before the one the tail pointer references, and set its next pointer to null.
- size(): return the size instance variable, which is updated whenever the list's data changes.
- isEmpty(): return true if size equals 0, false otherwise.
- getFirst(): return the first node if the list is not empty.
- getLast(): return the last node if the list is not empty.
- search(): loop over each element to find the element at the requested index, and return it.
- sort(): apply the selected sorting algorithm (section B.3) to sort the list in ascending order.

b) Doubly Linked List operations:
- addFirst(): insert an element, saved in a new node, before the current head. The next pointer of the new node points to the current head and the previous pointer of the current head points to the new node; the new node becomes the new head.
- addLast(): insert an element, saved in a new node, after the current tail. The next pointer of the current tail points to the new node and the previous pointer of the new node points to the current tail; the new node becomes the new tail.
- insert(): traverse to the node before the target index and set
the next pointer of this node to the new node, and set the previous pointer of the new node to that previous-index node. The next pointer of the new node is set to what the previous node pointed to, and the previous pointer of the node following the new node is set to the new node.
- removeFirst(): set the head pointer to the next node of the current head and set the old head's next pointer to null; the previous pointer of the new head node is also set to null.
- removeLast(): from the last node, step back to the previous node via its previous pointer. Set the next pointer of that previous node to null and the previous pointer of the old last node to null; the tail pointer now points to the previous node.
- size(), isEmpty(), getFirst(), getLast(), search(): same behaviour as in the singly linked list.
- sort(): apply the selected sorting algorithm to sort the list in ascending order.

1.3 Implement Singly and Doubly Linked-List

To demonstrate the implementation of the singly and doubly linked lists, here are code screenshots with a brief explanation of each function, written in the code comments.

a) Implement Singly Linked-List:
- Node class: Figure 1: Node structure of singly linked list
- Instance variables and constructor: Figure 2: instance variables and constructor of linked list
- addLast and addFirst operations: Figure 3: addLast Operation and addFirst Operation of singly
- Insert operation: Figure 4: singly Insert operation
- Search operation: Figure 5: Singly Search Operation
- getFirst and getLast operations: Figure 6: getFirst and getLast operation
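As a hedged illustration of the operations described in section 1.2 (this is not the assignment's actual code; field names such as head and tail are assumptions), a minimal singly linked list might look like:

```java
// Minimal sketch of a singly linked list with the operations named in 1.2.
// Operation names (addFirst, addLast, removeLast, size, isEmpty, getFirst,
// getLast) follow the report; the internals are illustrative assumptions.
public class SinglyLinkedList<T> {
    private static class Node<T> {
        T data;          // node data
        Node<T> next;    // pointer to the next node
        Node(T data) { this.data = data; }
    }

    private Node<T> head;
    private Node<T> tail;
    private int size;

    public void addFirst(T data) {
        Node<T> node = new Node<>(data);
        node.next = head;                 // new node points to the current head
        head = node;                      // the new node becomes the new head
        if (tail == null) tail = node;    // first element is also the tail
        size++;
    }

    public void addLast(T data) {
        Node<T> node = new Node<>(data);
        if (tail == null) { head = node; } else { tail.next = node; }
        tail = node;                      // the new node becomes the new tail
        size++;
    }

    // O(n): must traverse to the node just before the tail.
    public T removeLast() {
        if (head == null) throw new IllegalStateException("list is empty");
        T removed;
        if (head == tail) {
            removed = head.data;
            head = tail = null;
        } else {
            Node<T> prev = head;
            while (prev.next != tail) prev = prev.next;
            removed = tail.data;
            prev.next = null;             // detach the old tail
            tail = prev;
        }
        size--;
        return removed;
    }

    public int size() { return size; }
    public boolean isEmpty() { return size == 0; }
    public T getFirst() { return head == null ? null : head.data; }
    public T getLast() { return tail == null ? null : tail.data; }
}
```

Note how removeLast() is the operation that makes the doubly linked list attractive: here it costs a full traversal, whereas with a previous pointer it would be O(1).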
3.3 Explanation Of The Implementation

The mechanism of insertion sort is to pick the current node and compare it with the elements of the sorted side (the side whose last element is the picked node). To follow that in the code implementation, the current node steps back to its previous node and the process is repeated until the conditions of the two while loops no longer hold. To illustrate, continuing the process above: swap the data of the nodes being compared, then keep setting the current node to its previous node and comparing. If the previous node is greater than the current node, swap them; if not, skip this loop and continue. This process executes until the current node has no previous node (the previous node is now null). Looking at the position of the initial current node (just after the picked node), everything from there back to the front is a sorted linked list. The currentNode is then set to the fourth node, and the program continues the while loop until the linked list is completely sorted.

C IMPLEMENT ERROR HANDLING AND REPORT TEST RESULTS

1 Testing Plan

In this section I demonstrate some typical test cases in order to test the features of my linked lists; the evaluation below is based on this testing plan and its results.

Note descriptions:
- S1 -> the instance of the singly linked list
- S2 -> the instance of the doubly linked list

Singly LinkedList ADT (my singly linked list):
1. addLast(T data), normal: S1[6,9]; addLast(69). Expected: S1:[6,9,69], size() = 3, search(2) returns 69, getLast() returns 69. Actual: same as expected. Passed.
2. addFirst(T data), normal: S1[6,9]; addFirst(69). Expected: S1:[69,6,9], search(0) returns 69, getFirst() returns 69. Actual: same as expected. Passed.
3. addLast(T data), data validation: S1[6,9]; addLast(). Expected: S1:[6,9], throw error. Actual: S1[6,9], the program is stopped. Failed.
4. addFirst(T data), data validation: S1[6,9]; addFirst(). Expected: S1:[6,9], throw error. Actual: S1[6,9], the program is stopped. Failed.
5. insert(T data), data validation: S1[6,9]; insert(69, 3). Expected: S1:[6,9], throw error. Actual: throw error. Passed.
6. insert(T data), data validation: S1[6,9]; insert(69, -2). Expected: S1:[6,9], throw error. Actual: throw error. Passed.
7. insert(T data), data validation: S1[6,9]; insert("oko", 2). Expected: S1:[6,9], throw error. Actual: the program is stopped. Failed.
8. insert(T data), data validation: S1[6,9]; insert(836655, null). Expected: S1:[6,9], throw error. Actual: the program is stopped. Failed.
9. removeFirst(), data validation: S1[]; removeFirst(). Expected: S1:[], throw error. Actual: same as expected. Passed.
10. removeLast(), normal: S1[6,9,69]; removeLast(). Expected: S1:[6,9], return 69. Actual: same as expected. Passed.
11. search(), normal: S1[6,9]; search(1). Expected: return 9. Actual: same as expected. Passed.
12. search(), data validation: S1[6,9]; search(6). Expected: S1[6,9], throw error. Actual: throw error. Passed.

Doubly LinkedList ADT (my doubly linked list):
13. addLast(T data), normal: S2[6,9]; addLast(69). Expected: S2:[6,9,69], size() = 3, search(2) returns 69, getLast() returns 69. Actual: same as expected. Passed.
14. addLast(T data), data validation: S2[6,9]; addLast(). Expected: S2:[6,9], throw error. Actual: S2[6,9], the program is stopped. Failed.
15. getFirst(), data validation: S2[]; getFirst(). Expected: S2:[], throw error. Actual: same as expected. Passed.
16. addFirst(T data), data validation: S2[6,9]; addFirst(). Expected: S2:[6,9], throw error. Actual: S2[6,9], the program is stopped. Failed.
17. insert(T data), data validation: S2[6,9]; insert(69, 3). Expected: S2:[6,9], throw error. Actual: throw error. Passed.
18. insert(T data), data validation: S2[6,9]; insert(69, "2"). Expected: S2:[6,9], throw error. Actual: the program is stopped. Failed.
19. insert(T data), data validation: S2[6,9]; insert("oko", -2). Expected: S2:[6,9], throw error. Actual: the program is stopped. Failed.
20. isEmpty(), normal: S2[1,8,12]; isEmpty(). Expected: return false. Actual: return false. Passed.
21. insert(T data), data validation: S2[6,9]; insert(836655, null). Expected: S2:[6,9], throw error. Actual: the program is stopped. Failed.
22. removeFirst(), data validation: S2[]; removeFirst(). Expected: S2:[], throw error. Actual: same as expected. Passed.
23. removeLast(), data validation: S2[]; removeLast(). Expected: S2:[], throw error. Actual: same as expected. Passed.

Question b:
24. insertMiddle(T data), normal: S2[1,2,5,6]; insertMiddle(4). Expected: S2:[1,2,4,5,6], size += 1. Actual: same as expected. Passed.
25. insertMiddle(T data), data validation: S2[1,2,4,5]; insertMiddle("3"). Expected: S2[1,2,4,5], throw error. Actual: the program is stopped. Failed.
26. insertMiddle(T data), data validation: S2[1,2,4,5]; insertMiddle(). Expected: S2[1,2,4,5], throw error. Actual: same as expected. Passed.

Question c:
27. sort(), normal: S2[1,2,6,4]; sort(). Expected: S2[1,2,4,6]. Actual: same as expected. Passed.
28. sort(), data validation: S2[]; sort(). Expected: throw error. Actual: the program is stopped. Failed.
29. sort(), data validation: S2[1, "5"]; sort(). Expected: throw error. Actual: the program is stopped. Failed.

2 Evaluation

2.1 Overall: In the testing plan, 29 test cases were performed; 17 passed and 12 failed.

2.2 Explain and Solution

Explanation: most failures come from the data-validation test cases, almost all of them concerning argument checking. This indicates that my code lacks appropriate checking processes. The checks I did implement, such as checking that the list is not empty, that an argument is not null in some operations, or comparing an index argument against the size of the list, are not enough for stable and accurate operation. Some crucial checks are not implemented, such as checking whether an argument has the expected data type and whether it is null (most of the add operations lack this check). Additionally, an important check in the sort operation is whether all elements in the list share the same data type: because I implement the linked list with a generic type (T), it is possible to add
different data types into the linked list without declaring the data type.

Solution:
- The good solution now is to strengthen argument checking for all operations in both the singly and doubly linked lists: in particular, check the data type of each argument and add null checks to all add and insert operations. Although a list element could legitimately be null, it should be avoided to keep the mechanism working properly. Checking that all elements in the list share one data type is also important for the sorting process. Set up the exact exception to be thrown when an argument or condition is wrong.
- Extend the testing plan with more test cases to trace and detect more failures, then continue providing appropriate solutions.
- Implement more operations that support the existing ones in data normalisation.

D DISCUSS HOW ASYMPTOTIC ANALYSIS CAN BE USED TO ASSESS THE EFFECTIVENESS OF AN ALGORITHM

1 Asymptotic Analysis

Asymptotic analysis refers to the method of estimating an algorithm's time complexity in computational units in order to figure out the program's limits, also widely recognized as "run-time performance."
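To make concrete what asymptotic analysis measures, here is a small hedged sketch (illustrative only, not taken from the assignment's code) that counts the basic operations of a linear search; these counts, not wall-clock time, are the quantity the notations below abstract into best and worst cases:

```java
// Counting comparisons to observe asymptotic growth in practice.
public class GrowthDemo {
    // Linear search over the array, returning how many comparisons were made.
    static int comparisons(int[] a, int target) {
        int count = 0;
        for (int value : a) {
            count++;               // one comparison per visited element
            if (value == target) break;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 3, 4, 1};
        System.out.println(comparisons(a, 5)); // best case: 1 comparison
        System.out.println(comparisons(a, 1)); // worst case: n comparisons
    }
}
```

Searching for the first element costs one comparison regardless of n (the best case), while searching for the last costs n comparisons (the worst case); the notations below name exactly these bounds.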
The intention is to determine the best-case, worst-case, and average-case times for completing a specific task.

2 Asymptotic Notations And How They Relate To Ideas Of Best, Average And Worst Case

- Big-O notation represents a program's worst-case execution time. We determine an algorithm's Big-O by estimating how many iterations it will take in the worst-case situation with an input of N. We usually consult the Big-O because we must always plan for the worst-case scenario. For example, O(log n) specifies the Big-O of a binary search algorithm.
- Big-Ω (Omega) represents a program's best running time. We measure big-Ω by figuring out the number of iterations a program will perform in the best-case situation given an input of N. A bubble sort algorithm, for example, has a best-case time complexity of Ω(N), since in the best-case scenario the list is already sorted and the bubble sort terminates after the first pass.
- Big-Θ (Theta) notation serves to define the average bound of a program in terms of time complexity: it depicts the average time required by an algorithm over all inputs, that is, the average case of an algorithm's time complexity.
- Little-o notation is used significantly less frequently in complexity analysis. Little o is stronger than big O (Chaitanya, 2021): while O suggests growth that is no quicker, o signals strictly slower growth; little ω, in contrast, denotes strictly quicker growth.

To discuss asymptotic analysis and assess the effectiveness of an algorithm, here are some examples of algorithms and their asymptotic analysis:

a) Example: Assess The Effectiveness Of The Merge Sort Algorithm

Merge sort is a fairly fast sorting algorithm with a time complexity of O(n log n); it is basically independent of the state of the input (no matter what the input looks like, the time complexity does not change). The working mechanism
of merge sort is to divide the elements in half at every step. Because the input is halved each time, the mechanism is logarithmic: the dividing of the array or list can be demonstrated by at most log n + 1 levels. When merge sort takes the middle of an array or subarray to divide it, that takes O(1), because the task is only calculating the middle index. After dividing and sorting, the merging task takes O(n), because it needs to merge n elements. Hence, across all the tasks, the time complexity of merge sort is O(n * (log n + 1)) = O(n log n). Based on the asymptotic notations of the previous section, the cases of merge sort can be demonstrated as: best case [big-Ω]: Ω(n log n); average case [big-Θ]: Θ(n log n); worst case [Big-O]: O(n log n).

b) Example: Assess The Effectiveness Of The Selection Sort Algorithm

For the second example, selection sort is simple to implement but a low-performance sorting algorithm. Because of its two nested loops, it is an O(n²) algorithm with a quadratic worst case: for each element of the array, selection sort iterates over each remaining element (two nested loops). Because there are n items and it executes about n operations on each, the time complexity is O(n²). Although each inner loop does not traverse the entire array of size n, it traverses a quantity linearly proportional to the array's size, so the time complexity of each inner loop is still estimated as O(n). This partly means that whatever the state of the input, selection sort still takes O(n²) time. Based on the asymptotic notations of the previous section, I can analyse the complexity of this algorithm: best case, when there is no need for sorting because the array is already sorted [big-Ω]: Ω(n²); average case, when the array indices are jumbled [big-Θ]: Θ(n²); worst case, when the array
indices are in reverse order [Big-O]: O(n²).

c) Example: Assess The Effectiveness Of The Insertion Sort Algorithm

The last example is the insertion sort algorithm, another simply implemented algorithm with fairly low performance. With an input of n elements to sort, and applying two nested loops, this algorithm requires quadratic effort, O(n²) time complexity, except in the best case; the totals of the outer and inner loops add up to a value that grows quadratically. In the best-case scenario, the program discovers the insertion position at the first element with one comparison: each time the current element is compared with the sorted side, it is already in the right place and no swap is needed (simply put, this case refers to an input list that is already, or almost, sorted), so the cost is 1 + 1 + ... (n times) = O(n) for the best case. The average and worst cases are both estimated as O(n²), because in any other case the program still executes the two nested loops, even when half the list is already sorted. Based on the asymptotic notations of the previous section: best case [big-Ω], when the list is already sorted: Ω(n); average case [big-Θ], when the program traverses about half to three-fourths of the list: Θ(n²); worst case [Big-O], when the list must be fully traversed (for example, reverse order): O(n²).

3 Some Examples to Clarify O(1), O(n), O(N log N)

a) O(1) - constant time

O(1) signifies an algorithm that always executes in the same amount of time (or space), regardless of the size of the input data set (Woltmann, 2020). For example, one of the crucial steps in merge sort is taking the divide index by calculating the middle index of an array. No matter what the status of the input is
and how many elements the input has (1,000, 10,000, a million or more), the complexity of this calculation is always O(1), because it only needs to perform a single computation (a sum and a division).

b) O(n) - linear time

O(n) indicates that the complexity grows linearly with the number of elements n; it commonly belongs to algorithms that traverse every element in a collection. If n doubles, the time approximately doubles too. For example, to remove the last node from a singly linked list, the program must traverse from the head pointer, via each node's next pointer, to the node just before the last one. Although the real implementation performs (n - 1) traversal steps, under big-O notation only the dominant term is considered, so it is O(n); even if the actual count were (n - 2) or (n - 3), the complexity would still be O(n).

c) O(n²) - quadratic time

O(n²) indicates a function whose complexity is proportional to the square of the input size; adding additional nested iterations across the input increases the complexity. Bubble sort is the best example to illustrate quadratic time complexity: the time required to solve the problem grows significantly, and in the average and worst case this algorithm, with its two nested loops, has a quadratic running time of O(n²).

d) O(log n) - logarithmic time

O(log n) denotes an algorithm whose complexity grows logarithmically as the input size increases. As a direct consequence, O(log n) algorithms scale quite well, and bigger inputs are far less likely to create performance issues. Simply put, logarithmic time complexities are commonly associated with algorithms that divide the problem in half every time (Newton, 2017). An example of O(log n) is the isContain() operation in my doubly linked list: this operation uses a binary search algorithm to determine whether the input list includes the checked-data argument. In
simple terms, it separates the doubly linked list into halves on each loop until the node data is discovered or the final member is read.

e) O(N log N) - linearithmic time

O(N log N) denotes that log n operations take place n times. O(N log N) time is prevalent in recursive sorting algorithms, binary-tree sorting algorithms, and most other kinds of sorts. The familiar example of O(n log n) is quick sort: simply put, quick sort applies divide and conquer in its partition mechanism, and this dividing takes O(log n) levels; to divide and conquer over all elements of the array, the work must be performed n times at each level. So, in the best and average case, quick sort takes O(n log n) complexity for the sorting process.

f) O(2ⁿ) - exponential time

This complexity denotes an algorithm whose work doubles every time the input grows. For example, the Fibonacci function has two implementation styles, and in the worst (naively recursive) one, the number of recursive calls and calculations roughly doubles every time the Fibonacci index increases, even by one unit. This complexity should always be avoided when implementing an algorithm.

g) O(n!) - factorial time

O(n!) is the product of all positive integers up to n (Woltmann, 2020). It is the "worst" available complexity. For instance, a deck has 52 poker cards, with 52!
different possible orderings of the cards after shuffling, an almost uncountably large number. So it is unnecessary to give an algorithm code example for this one, because practically no usable algorithm reaches this complexity.

E DETERMINE TWO WAYS IN WHICH THE EFFICIENCY OF AN ALGORITHM CAN BE MEASURED, ILLUSTRATING YOUR ANSWER WITH AN EXAMPLE

An algorithm's complexity and efficiency can be measured, in terms of Big O, using two metrics: time complexity and space complexity.
- Time complexity: the number of operations an algorithm performs to achieve its target, relative to the size of the input, assuming each operation takes the same amount of time (admin, 2019). The most efficient algorithm is the one that accomplishes the task in the fewest operations.
- Space complexity: the total space (RAM, HDD, etc.) used or required by an algorithm for its operation, at varying input sizes.

When assessing whether an algorithm is good, medium or bad, it is essential to consider the input size n, the number of input elements; for example, the input size of a sorting problem is the total number of items to sort. We characterise the input size n appropriately for the specific situation, and we want to make a logical prediction about how the algorithm's time complexity relates to n: the order of growth, that is, how the algorithm will scale and behave given the input size n.

For example, consider code that makes an array of size n from the input. The space complexity of such code is on the order of n, which means that as n increases, so does the space required; even creating the variable n itself requires some space. The space complexity of the algorithm refers to the total amount
of space required by the method. The time required by that code also depends on the computational performance of the system used and the speed of the programming language, but when assessing time complexity we disregard such external factors and just consider the number of times the loop is processed in relation to the input size. Assuming one loop iteration takes one millisecond, executing the loop n times takes n milliseconds.

The example above shows how to measure efficiency by counting loop iterations against the input n. To dive into how time complexity and space complexity can measure the efficiency of an algorithm with asymptotic notation, relating to the ideas of best, average and worst case, here is a continuation of that example. I create a new method that performs a linear search algorithm implemented with recursion. Its working mechanism is quite simple: it recurses until one of two cases happens. The true case is when the checked value appears in the array, and the false case is when all elements have been traversed and none matches the check argument. In this example, the time and space complexity are still partly based on the input n, but they can now differ depending on the check argument:
- Best case: assuming the input array is [5, 2, 3, 4, 1] and the check argument is 5, isContain executes only once (only the == operation is performed) and only one recursive call is stored in stack memory; the time complexity is Ω(1) and the space complexity is O(1) (one flag is declared).
- While the best case can be determined simply
just based on the 'check' argument, the complexity of the average and worst cases is unstable and grows with the input size. For instance, if the array size is 100 and the 'check' value is found at the 56th position, the running count is 56 and the number of isContain calls stored on the stack is also 56; it is still considered O(n). If the array size is a million and the 'check' position is 999 thousand, the running time grows greatly compared to the previous case, but the complexity is still considered O(n). Even in the worst case, with an array of 10 billion elements and a 'check' value that does not exist, the program has to traverse all elements; the cost grows extremely high, and this recursive implementation may even cause a stack overflow. So the complexity of this method is still classified as O(n) linear time, but the example shows how the actual cost grows as the input size grows. Although the best case gives a good-looking complexity, it is important to care about the average and worst cases, where the efficiency of the algorithm is actually assessed via its time and space complexity.

These two examples also partly show that the order of growth of the input directly affects the program's running time and space requirements. In other words, assessing the efficiency of an algorithm is really a process of estimating how the time or space spent by the algorithm will increase or decrease as the input size increases or decreases. Moreover, in some cases, to enhance the efficiency of an algorithm we need to trade off time complexity against space complexity; this concept is discussed in the next section.

F TRADE-OFF WHEN SPECIFYING AN ADT

In this section, I would like to discuss the space-time trade-off in the general sense that "you can decrease the time complexity of your algorithm in
Simply put, a problem arising when specifying an ADT can often be solved by several algorithms, some taking less time and others using less storage space (Singhal, 2017). Although an algorithm that uses less memory and runs faster while generating output would be the ideal choice for a given problem, in reality it is not always feasible to achieve both goals at once. As noted above, a single problem may have several solutions, so the trade-off between them is worth considering. For example, when designing the sort operation for the doubly linked list, many sorting algorithms are available. All of them were invented to solve sorting tasks, but each has its own time and space complexity, because its designers also had to trade time against space to make the algorithm efficient and suitable for particular scenarios.

Returning to my example, I considered two sorting algorithms, selection sort and merge sort, when designing the sort operation for my own doubly linked list. Both can sort a linked list efficiently in their own way. Merge sort is quite a fast sorting algorithm whose running time is largely independent of the arrangement of the input: no matter how the elements of the linked list are ordered, its time complexity is O(N log N), a good speed for a sorting algorithm. However, achieving this speed requires an auxiliary list, which gives merge sort a space complexity of O(n), acceptable but not ideal. Selection sort, on the other side, is really bad at time complexity. Its working mechanism relies entirely on two nested loops, so regardless of the input it leads to O(n²) time, a bad complexity for a sorting algorithm.
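A minimal sketch of selection sort on a linked list makes the two nested traversals behind the O(n²) bound visible, while also showing that no auxiliary list is needed. The Node class and field names here are assumptions for illustration, not the assignment's actual classes, and a singly linked structure is used for brevity.

```java
// Selection sort on a linked list by swapping node values in place:
// the outer loop fixes one position at a time, the inner loop scans the
// remainder for the minimum -> O(n^2) comparisons, O(1) extra space.
public class ListSelectionSort {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    public static void sort(Node head) {
        for (Node i = head; i != null; i = i.next) {
            Node min = i;
            for (Node j = i.next; j != null; j = j.next) {
                if (j.data < min.data) min = j;   // remember smallest seen so far
            }
            int tmp = i.data;                      // swap values, not nodes
            i.data = min.data;
            min.data = tmp;
        }
    }
}
```

Swapping the stored values rather than relinking nodes keeps the sketch short; relinking would give the same complexities.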
However, it seems to require no extra space in memory: it operates directly on the elements of the input list, which gives it a very good space complexity of O(1). When specifying the sort operation in my doubly linked list, I needed the space requirement to be as small as possible and did not really need a fast running time for this operation. Although ideally I would want the sort operation to be both fast and memory-light, that wish is not achievable here, so I made the decision to trade off program running time for a smaller space requirement in the sort operation.

For a real-world example, consider the sine and cosine functions, which are frequently used in game programming: computing them on the fly is slow (Suh, 2019). A modern offline game already ships many resource packages, each quite heavy, and loading them when the game runs consumes a lot of memory. Even so, programmers frequently precompute the sine and cosine values they anticipate needing throughout the game, rather than computing them on the fly while playing. By trading memory for these time-consuming calculations, the developers add some space to the program but also speed it up considerably.

In conclusion, efficiency in both space and time, though it may seem a minor concern, is actually particularly vital. There are many trade-offs that may be made depending on the circumstances, but typically, for most programmers, time performance is the key concern, while in places where memory is limited, space complexity is of course the issue.

G CONCLUSION

To sum up, the necessary tasks completed and discussed in this report are as follows: implementing a complex algorithm and data structure (singly and doubly linked lists) together with their valid operations.
I have implemented error handling and reported test results. I have discussed how asymptotic analysis can be used to assess the effectiveness of an algorithm, with illustrated examples, and have illustrated a specific data structure for a First In First Out (FIFO) queue. Two ways in which the efficiency of an algorithm can be measured have also been determined and illustrated with an example, and I have discussed the space-time trade-off when specifying an ADT. Through this report I have also gained some basic knowledge about data structures and sorting algorithms, which will help me a lot in the future.

References

admin, 2019. Time and Space Complexity Analysis of Algorithm. [Online] Available at: https://afteracademy.com/blog/time-and-space-complexity-analysis-of-algorithm

Chaitanya, S., 2021. Data Structure Asymptotic Notation. [Online] Available at: https://beginnersbook.com/2018/10/ds-asymptotic-notation/

Newton, D., 2017. Learning Big O Notation with O(n) complexity. [Online] Available at: https://lankydan.dev/2017/04/23/learning-big-o-notation-with-on-complexity

Palaniappan, A., 2021. Insertion Sort. [Online] Available at: https://linuxhint.com/insertion-sort/

Singhal, A., 2017. Getting the best of both worlds: Space-time trade-offs in algorithms. [Online] Available at: https://hackernoon.com/getting-the-best-of-both-worlds-space-time-trade-offs-in-algorithms-b62116aaf3ef

Woltmann, S., 2020. Big O Notation and Time Complexity – Easily Explained. [Online] Available at: https://www.happycoders.eu/algorithms/big-o-notation-time-complexity/