Principles of Analysis of Algorithms and Sorting Methods

Background information:

An algorithm can be described as a well-defined computational process that takes some values as input and produces some values as output (Cooper & Linda, 2008). An algorithm provides a step-by-step method for solving a computational problem. Algorithms differ from computer programs in that they do not depend on a particular programming language, operating system, or machine; for the purpose of analysis they are treated as running on an abstract machine with unlimited random-access memory and an unbounded word size.

Methodology:

Empirical methods are used to study the performance of algorithms that cannot be analyzed with techniques from computational theory, and they are often the most feasible way of studying an algorithm's performance (Matthew, 1997). Empirical analysis is able to bypass some of the problems that can befall a purely theoretical approach. Among its advantages are that it does not depend on proving hard worst-case and average-case theorems, that it can focus on typical-case behaviour, and that it is not restricted to simple and unrealistic problem distributions.

According to Steven (2005), the empirical approach is in a sense the opposite of the theoretical one, but despite this difference it still has a theoretical side. In the early stages of an empirical study of an algorithm, a few exploratory tests are run just to see what happens. A hypothesis follows, which is then tested empirically. Once everything is done, one can put together a clear, solid picture that explains why certain aspects matter. This progression shows that even though the empirical approach is practical, its eventual conclusion is theoretical, because it has to be explained; in that sense the word 'empirical' can be somewhat misleading.
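As a concrete illustration of this workflow, the following is a minimal C++ sketch of an exploratory timing run; the choice of std::sort as the algorithm under test, the input sizes, and the random data are assumptions made purely for demonstration.

    // Exploratory empirical run: time an algorithm on inputs of growing size.
    #include <algorithm>
    #include <chrono>
    #include <iostream>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 gen(42);                            // fixed seed for repeatable runs
        std::uniform_int_distribution<int> dist(0, 1000000);
        for (std::size_t n = 1000; n <= 64000; n *= 2) {
            std::vector<int> data(n);
            for (int &x : data) x = dist(gen);           // a random test instance

            auto start = std::chrono::steady_clock::now();
            std::sort(data.begin(), data.end());         // the algorithm under test
            auto stop = std::chrono::steady_clock::now();

            std::chrono::duration<double, std::milli> ms = stop - start;
            std::cout << "n = " << n << "  time = " << ms.count() << " ms\n";
        }
    }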

Results:

As indicated in the solution to problem E2, the worst case for the contiguous insertion sort occurs when the list is supplied in reverse order. Inserting the kth entry into the list then needs k − 1 comparisons and k + 1 assignments, so with all n values checked the worst-case comparison count is

∑_{k=2}^{n} (k − 1) = ½(n − 1)n.

Counting every key moved gives a total assignment count for this scenario of ½(n − 1)n. The contiguous list sorting demonstration program permits different contiguous sorting packages to be applied without changing the program.
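This count can also be checked empirically. The sketch below, a plain contiguous insertion sort with a comparison counter (an illustrative stand-in for the course's own package, not taken from it), sorts a list supplied in reverse order and compares the observed count with ½(n − 1)n.

    // Count key comparisons made by insertion sort on a reverse-ordered list
    // and compare the total with the predicted worst case of (n - 1) * n / 2.
    #include <iostream>
    #include <vector>

    int main() {
        const int n = 10;
        std::vector<int> entry(n);
        for (int i = 0; i < n; ++i) entry[i] = n - i;    // n, n-1, ..., 1

        long comparisons = 0;
        for (int i = 1; i < n; ++i) {
            int temp = entry[i];
            int j = i - 1;
            while (j >= 0) {
                ++comparisons;                           // one key comparison
                if (entry[j] > temp) {
                    entry[j + 1] = entry[j];             // shift the larger entry up
                    --j;
                } else break;
            }
            entry[j + 1] = temp;
        }
        std::cout << "observed comparisons: " << comparisons << '\n'
                  << "predicted (n-1)n/2:   " << (long)(n - 1) * n / 2 << '\n';
    }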

Program: ContiguousSortTest (input, output)
{ Pre:  None.  Post: the number of key comparisons, the CPU time, and the
  number of assignments of list entries for the contiguous sorting packages
  have been computed. }

Sorting (fragment of the binary insertion step):

    if temp.key > L.entry[mid].key then
        bottom := mid + 1
    else top := mid end;
    for j := i downto mid + 1 do begin
        assignments := assignments + 1;
        entry[j] := entry[j - 1]
    end;
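Read as a whole, the fragment performs a binary search for the insertion point and then shifts the larger entries upward while counting the work done. The following self-contained C++ sketch shows the same idea; the variable names echo the fragment and the counters are illustrative assumptions rather than the course's own package.

    // Binary insertion sort over a contiguous list, counting key comparisons
    // and assignments of list entries, in the spirit of the fragment above.
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> entry = {9, 3, 7, 1, 5};
        long comparisons = 0, assignments = 0;

        for (std::size_t i = 1; i < entry.size(); ++i) {
            int temp = entry[i];
            std::size_t bottom = 0, top = i;
            while (bottom < top) {                        // binary search for the slot
                std::size_t mid = (bottom + top) / 2;
                ++comparisons;
                if (temp > entry[mid]) bottom = mid + 1;
                else                   top = mid;
            }
            for (std::size_t j = i; j > bottom; --j) {    // shift larger entries up
                ++assignments;
                entry[j] = entry[j - 1];
            }
            entry[bottom] = temp;                         // place the entry
            ++assignments;
        }

        for (int x : entry) std::cout << x << ' ';
        std::cout << "\ncomparisons: " << comparisons
                  << "  assignments: " << assignments << '\n';
    }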

The summary of the progress:

    Processing | In-state  | Old out-state | New out-state | Work list
    -----------|-----------|---------------|---------------|----------
    J3         | {}        | {}            | {b, d}        | {b1, b2}
    J1         | {b, d}    | {}            | {}            | (b2)
    J2         | {b, d}    | {}            | {a, b}        | (b1)
    J1         | {a, b, d} | {}            | {}            | ()

Table 1

Discussion:

Theoretical analysis, on the other hand, uses a high-level description of the algorithm instead of an implementation. In this it differs from the empirical method, in which the algorithm must be implemented, and implementation can be very difficult (Steven, 2005). The theoretical approach characterizes the running time as a function of the input size, whereas the empirical method's results may not indicate the running time on inputs that were not included in the experiment; the theoretical approach, by contrast, takes all possible inputs into account. The theoretical approach also allows the speed of an algorithm to be evaluated independently of the hardware or software environment, while the empirical approach must use the same hardware and software environment when comparing two algorithms.

The first step in writing the program is to create several files of integers to be used in testing the sorting methods. The files should be of various sizes, and they should be arranged in the following ways: in order, in reverse order, partially in order, and in random order. Keeping all the data in files has the great advantage that the same data can be used to test the various sorting methods, which makes it easier to compare their performance.
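A minimal sketch of this file-creation step in C++ might look as follows; the file names, the element count, and the use of one integer per line are assumptions made for illustration.

    // Create four test files of integers: in order, in reverse order,
    // partially in order, and in random order.
    #include <algorithm>
    #include <fstream>
    #include <numeric>
    #include <random>
    #include <string>
    #include <vector>

    static void writeFile(const std::string &name, const std::vector<int> &data) {
        std::ofstream out(name);
        for (int x : data) out << x << '\n';
    }

    int main() {
        const int n = 10000;
        std::mt19937 gen(1);
        std::vector<int> data(n);

        std::iota(data.begin(), data.end(), 1);                   // 1, 2, ..., n
        writeFile("ordered.txt", data);

        std::reverse(data.begin(), data.end());                   // n, ..., 2, 1
        writeFile("reversed.txt", data);

        std::iota(data.begin(), data.end(), 1);                   // partially in order:
        std::shuffle(data.begin() + n / 2, data.end(), gen);      // disorder the tail
        writeFile("partial.txt", data);

        std::shuffle(data.begin(), data.end(), gen);              // fully random order
        writeFile("random.txt", data);
    }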

The program should be menu driven, because it will be used to evaluate several sorting methods. The menu options include reading a file of integers into a list, running one of the different sorting methods on the list, and printing the sorted or unsorted list. Afterwards the list is discarded, so that a later evaluation can start with the same input data (Keith & Kennedy, 2001).
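One way such a menu-driven driver could be organized is sketched below; the menu choices and the use of std::sort in place of the course's own sorting packages are hypothetical.

    // Skeleton of a menu-driven test driver: read a file of integers into a
    // list, sort it, print it, or discard it so the same data can be reloaded.
    #include <algorithm>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<int> list;
        char choice = ' ';
        while (choice != 'q') {
            std::cout << "(r)ead file  (s)ort  (p)rint  (d)iscard  (q)uit: ";
            if (!(std::cin >> choice)) break;
            if (choice == 'r') {                       // read a file of integers
                std::cout << "file name: ";
                std::string name;
                std::cin >> name;
                std::ifstream in(name);
                list.clear();
                for (int x; in >> x; ) list.push_back(x);
            } else if (choice == 's') {                // run a sorting method
                std::sort(list.begin(), list.end());   // a sorting package goes here
            } else if (choice == 'p') {                // print the (un)sorted list
                for (int x : list) std::cout << x << ' ';
                std::cout << '\n';
            } else if (choice == 'd') {                // discard the list
                list.clear();
            }
        }
    }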

This can be achieved by arranging the program so that it reads the data file each time sorting begins. Code is then inserted to compute and print the CPU time, the number of comparisons of keys, and the number of assignments of list entries made while sorting a list. Counting the comparisons and assignments requires setting up global variables that are incremented every time a key comparison or an assignment is made. According to Flemming and Chris (2005), the contiguous list package is then extended to include the contiguous version of insertion sort and to gather statistics on its performance for later comparison with other methods. The next stage uses the linked list package to include the linked version of insertion sort for further comparison with other methods. The contiguous list sorting demonstration should allow the different contiguous sorting packages to be used without a program change.
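For the linked version, a minimal sketch might look like the following; the node type and the global counters are illustrative assumptions, and the structure mirrors rather than reproduces the course's linked list package.

    // Linked-list insertion sort with global counters for key comparisons
    // and for relinking of list entries, mirroring the contiguous version.
    #include <iostream>

    struct Node { int key; Node *next; };

    long comparisons = 0;        // statistics gathered while sorting
    long assignments = 0;

    Node *insertionSort(Node *head) {
        Node *sorted = nullptr;
        while (head != nullptr) {
            Node *node = head;                 // detach the next unsorted node
            head = head->next;
            ++assignments;                     // the node is relinked exactly once
            if (sorted == nullptr || (++comparisons, node->key < sorted->key)) {
                node->next = sorted;           // insert at the front
                sorted = node;
            } else {
                Node *p = sorted;              // walk to the insertion point
                while (p->next != nullptr && (++comparisons, p->next->key <= node->key))
                    p = p->next;
                node->next = p->next;
                p->next = node;
            }
        }
        return sorted;
    }

    int main() {
        int keys[] = {5, 2, 4, 1, 3};
        Node *head = nullptr;
        for (int i = 4; i >= 0; --i) head = new Node{keys[i], head};

        head = insertionSort(head);
        for (Node *p = head; p != nullptr; p = p->next) std::cout << p->key << ' ';
        std::cout << "\ncomparisons: " << comparisons
                  << "  assignments: " << assignments << '\n';

        while (head != nullptr) { Node *t = head->next; delete head; head = t; }
    }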

Conclusion:

Overall the project was successful. The only significant alteration needed to demonstrate the program using linked lists was to change the contiguous list package to the linked list package and to restructure the introduction to illustrate linked lists.

References:

Cooper, D., & Linda, T. (2008). Engineering a Compiler. Salisbury, MD: Beacon Publishers.

Steven, S. (2005). Advanced Compiler and Implementation. New York, NY: Taylor & Francis Group.

Matthew, H. (1997). Flow Analysis of Computer Programs. Oaks, CA: Sage Publications, Inc.

Keith, D., & Kennedy, J. (2001). Principles of Program Analysis. Chicago, IL: University of Chicago Press.

Flemming, N., & Chris, H. (2005). Data Flow Analysis. Cambridge, MA: MIT Press.

Sanyal, P. (2007). Iterative Data Flow Analysis. Oxford: Blackwell Publishing.