Chapter: Introduction to the Design and Analysis of Algorithms

Fundamentals of Algorithmic Problem Solving

Let us start by reiterating an important point made in the introduction to this chapter:

We can consider algorithms to be procedural solutions to problems.

These solutions are not answers but specific instructions for getting answers. It is this emphasis on precisely defined constructive procedures that makes computer science distinct from other disciplines. In particular, this distinguishes it from theoretical mathematics, whose practitioners are typically satisfied with just proving the existence of a solution to a problem and, possibly, investigating the solution’s properties.

We now list and briefly discuss a sequence of steps one typically goes through in designing and analyzing an algorithm (Figure 1.2).

Understanding the Problem

From a practical perspective, the first thing you need to do before designing an algorithm is to understand completely the problem given. Read the problem’s description carefully and ask questions if you have any doubts about the problem, do a few small examples by hand, think about special cases, and ask questions again if needed.

There are a few types of problems that arise in computing applications quite often. We review them in the next section. If the problem in question is one of them, you might be able to use a known algorithm for solving it. Of course, it helps to understand how such an algorithm works and to know its strengths and weaknesses, especially if you have to choose among several available algorithms. But often you will not find a readily available algorithm and will have to design your own. The sequence of steps outlined in this section should help you in this exciting but not always easy task.

An input to an algorithm specifies an instance of the problem the algorithm solves. It is very important to specify exactly the set of instances the algorithm needs to handle. (As an example, recall the variations in the set of instances for the three greatest common divisor algorithms discussed in the previous section.) If you fail to do this, your algorithm may work correctly for a majority of inputs but crash on some “boundary” value. Remember that a correct algorithm is not one that works most of the time, but one that works correctly for all legitimate inputs.

Do not skimp on this first step of the algorithmic problem-solving process; otherwise, you will run the risk of unnecessary rework.

Ascertaining the Capabilities of the Computational Device

Once you completely understand a problem, you need to ascertain the capabilities of the computational device the algorithm is intended for. The vast majority of algorithms in use today are still destined to be programmed for a computer closely resembling the von Neumann machine, a computer architecture outlined by the prominent Hungarian-American mathematician John von Neumann (1903–1957), in collaboration with A. Burks and H. Goldstine, in 1946. The essence of this architecture is captured by the so-called random-access machine (RAM). Its central assumption is that instructions are executed one after another, one operation at a time. Accordingly, algorithms designed to be executed on such machines are called sequential algorithms.

The central assumption of the RAM model does not hold for some newer computers that can execute operations concurrently, i.e., in parallel. Algorithms that take advantage of this capability are called parallel algorithms. Still, studying the classic techniques for design and analysis of algorithms under the RAM model remains the cornerstone of algorithmics for the foreseeable future.

Should you worry about the speed and amount of memory of a computer at your disposal? If you are designing an algorithm as a scientific exercise, the answer is a qualified no. As you will see in Section 2.1, most computer scientists prefer to study algorithms in terms independent of specification parameters for a particular computer. If you are designing an algorithm as a practical tool, the answer may depend on a problem you need to solve. Even the “slow” computers of today are almost unimaginably fast. Consequently, in many situations you need not worry about a computer being too slow for the task. There are important problems, however, that are very complex by their nature, or have to process huge volumes of data, or deal with applications where the time is critical. In such situations, it is imperative to be aware of the speed and memory available on a particular computer system.

Choosing between Exact and Approximate Problem Solving

The next principal decision is to choose between solving the problem exactly or solving it approximately. In the former case, an algorithm is called an exact algorithm; in the latter case, an algorithm is called an approximation algorithm. Why would one opt for an approximation algorithm? First, there are important problems that simply cannot be solved exactly for most of their instances; examples include extracting square roots, solving nonlinear equations, and evaluating definite integrals. Second, available algorithms for solving a problem exactly can be unacceptably slow because of the problem’s intrinsic complexity. This happens, in particular, for many problems involving a very large number of choices; you will see examples of such difficult problems in Chapters 3, 11, and 12. Third, an approximation algorithm can be a part of a more sophisticated algorithm that solves a problem exactly.
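To make the square-root example concrete, here is a minimal sketch of an approximation algorithm: Newton's method computes the square root of a only to within a chosen tolerance, which is all one can ask for, since the exact value is irrational for most inputs. The sketch is in Python; the function name and tolerance parameter are illustrative choices of ours, not something prescribed by the text.

# Approximation algorithm: Newton's method for square roots.
# The exact value of sqrt(a) is irrational for most a, so we settle for an
# answer within a relative tolerance eps (names are illustrative).
def approx_sqrt(a, eps=1e-10):
    if a < 0:
        raise ValueError("a must be nonnegative")
    if a == 0:
        return 0.0
    x = a if a >= 1 else 1.0          # initial guess
    while abs(x * x - a) > eps * a:
        x = 0.5 * (x + a / x)         # Newton iteration
    return x

print(approx_sqrt(2))                 # about 1.4142135623730951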

Algorithm Design Techniques

Now, with all the components of the algorithmic problem solving in place, how do you design an algorithm to solve a given problem? This is the main question this book seeks to answer by teaching you several general design techniques.

What is an algorithm design technique?

An algorithm design technique (or “strategy” or “paradigm”) is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.

Check this book’s table of contents and you will see that a majority of its chapters are devoted to individual design techniques. They distill a few key ideas that have proven to be useful in designing algorithms. Learning these techniques is of utmost importance for the following reasons.

First, they provide guidance for designing algorithms for new problems, i.e., problems for which there is no known satisfactory algorithm. Therefore—to use the language of a famous proverb—learning such techniques is akin to learning to fish as opposed to being given a fish caught by somebody else. It is not true, of course, that each of these general techniques will be necessarily applicable to every problem you may encounter. But taken together, they do constitute a powerful collection of tools that you will find quite handy in your studies and work.

Second, algorithms are the cornerstone of computer science. Every science is interested in classifying its principal subject, and computer science is no exception. Algorithm design techniques make it possible to classify algorithms according to an underlying design idea; therefore, they can serve as a natural way to both categorize and study algorithms.

Designing an Algorithm and Data Structures

While the algorithm design techniques do provide a powerful set of general approaches to algorithmic problem solving, designing an algorithm for a particular problem may still be a challenging task. Some design techniques can be simply inapplicable to the problem in question. Sometimes, several techniques need to be combined, and there are algorithms that are hard to pinpoint as applications of the known design techniques. Even when a particular design technique is applicable, getting an algorithm often requires a nontrivial ingenuity on the part of the algorithm designer. With practice, both tasks (choosing among the general techniques and applying them) get easier, but they are rarely easy.

Of course, one should pay close attention to choosing data structures appropriate for the operations performed by the algorithm. For example, the sieve of Eratosthenes introduced in Section 1.1 would run longer if we used a linked list instead of an array in its implementation (why?). Also note that some of the algorithm design techniques discussed in Chapters 6 and 7 depend intimately on structuring or restructuring data specifying a problem’s instance. Many years ago, an influential textbook proclaimed the fundamental importance of both algorithms and data structures for computer programming by its very title: Algorithms + Data Structures = Programs [Wir76]. In the new world of object-oriented programming, data structures remain crucially important for both design and analysis of algorithms. We review basic data structures in Section 1.4.
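The data-structure point can be illustrated with a short sketch of the sieve of Eratosthenes using an array (a Python list) of boolean flags; crossing out a multiple is then a constant-time indexing operation, exactly what a linked list would not provide. This is our own minimal rendering, not the book's code.

# Sieve of Eratosthenes over an array of flags: marking a multiple is O(1)
# because the array supports direct indexing (a linked list would not).
def sieve(n):
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= n:
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [i for i in range(2, n + 1) if is_prime[i]]

print(sieve(25))   # [2, 3, 5, 7, 11, 13, 17, 19, 23]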

Methods of Specifying an Algorithm

Once you have designed an algorithm, you need to specify it in some fashion. In Section 1.1, to give you an example, Euclid’s algorithm is described in words (in a free and also a step-by-step form) and in pseudocode. These are the two options that are most widely used nowadays for specifying algorithms.

Using a natural language has an obvious appeal; however, the inherent ambiguity of any natural language makes a succinct and clear description of algorithms surprisingly difficult. Nevertheless, being able to do this is an important skill that you should strive to develop in the process of learning algorithms.

Pseudocode is a mixture of a natural language and programming language-like constructs. Pseudocode is usually more precise than natural language, and its usage often yields more succinct algorithm descriptions. Surprisingly, computer scientists have never agreed on a single form of pseudocode, leaving textbook authors with a need to design their own “dialects.” Fortunately, these dialects are so close to each other that anyone familiar with a modern programming language should be able to understand them all.

This book’s dialect was selected to cause minimal difficulty for a reader. For the sake of simplicity, we omit declarations of variables and use indentation to show the scope of such statements as for, if, and while. As you saw in the previous section, we use an arrow “←” for the assignment operation and two slashes “//” for comments.

In the earlier days of computing, the dominant vehicle for specifying algorithms was a flowchart, a method of expressing an algorithm by a collection of connected geometric shapes containing descriptions of the algorithm’s steps. This representation technique has proved to be inconvenient for all but very simple algorithms; nowadays, it can be found only in old algorithm books.

The state of the art of computing has not yet reached a point where an algorithm’s description—be it in a natural language or pseudocode—can be fed into an electronic computer directly. Instead, it needs to be converted into a computer program written in a particular computer language. We can look at such a program as yet another way of specifying the algorithm, although it is preferable to consider it as the algorithm’s implementation.
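For example, Euclid's algorithm, specified in Section 1.1 in pseudocode with the ← assignment arrow, translates almost line for line into a program. A minimal Python rendering of ours (assuming nonnegative integer inputs, not both zero) might look like this:

# Euclid's algorithm: gcd(m, n) = gcd(n, m mod n), stopping when n becomes 0.
# Assumes m and n are nonnegative integers, not both zero.
def gcd(m, n):
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))   # 12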

Proving an Algorithm’s Correctness

Once an algorithm has been specified, you have to prove its correctness. That is, you have to prove that the algorithm yields a required result for every legitimate input in a finite amount of time. For example, the correctness of Euclid’s algorithm for computing the greatest common divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n) (which, in turn, needs a proof; see Problem 7 in Exercises 1.1), the simple observation that the second integer gets smaller on every iteration of the algorithm, and the fact that the algorithm stops when the second integer becomes 0.

For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex. A common technique for proving correctness is to use mathematical induction because an algorithm’s iterations provide a natural sequence of steps needed for such proofs. It might be worth mentioning that although tracing the algorithm’s performance for a few specific inputs can be a very worthwhile activity, it cannot prove the algorithm’s correctness conclusively. But in order to show that an algorithm is incorrect, you need just one instance of its input for which the algorithm fails.
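That point can be made concrete with a small sketch: checking a candidate algorithm against a trusted reference on many inputs may turn up a counterexample, which suffices to show incorrectness, but passing every such test proves nothing about the inputs not tried. The deliberately buggy gcd below is our own invented example, compared against Python's math.gcd.

# Testing can disprove correctness (one failing input is enough) but cannot
# prove it. The candidate below is wrong on purpose: the extra guard m >= n
# makes it return m unchanged whenever m < n, which is wrong unless m divides n.
import math

def buggy_gcd(m, n):
    while n != 0 and m >= n:          # wrong guard
        m, n = n, m % n
    return m

def find_counterexample(limit=50):
    for m in range(1, limit):
        for n in range(1, limit):
            if buggy_gcd(m, n) != math.gcd(m, n):
                return m, n
    return None

print(find_counterexample())          # (2, 3): candidate returns 2, true gcd is 1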

The notion of correctness for approximation algorithms is less straightforward than it is for exact algorithms. For an approximation algorithm, we usually would like to be able to show that the error produced by the algorithm does not exceed a predefined limit. You can find examples of such investigations in Chapter 12.

Analyzing an Algorithm

We usually want our algorithms to possess several qualities. After correctness, by far the most important is efficiency. In fact, there are two kinds of algorithm efficiency: time efficiency, indicating how fast the algorithm runs, and space efficiency, indicating how much extra memory it uses. A general framework and specific techniques for analyzing an algorithm’s efficiency appear in Chapter 2.

Another desirable characteristic of an algorithm is simplicity. Unlike efficiency, which can be precisely defined and investigated with mathematical rigor, simplicity, like beauty, is to a considerable degree in the eye of the beholder. For example, most people would agree that Euclid’s algorithm is simpler than the middle-school procedure for computing gcd(m, n), but it is not clear whether Euclid’s algorithm is simpler than the consecutive integer checking algorithm. Still, simplicity is an important algorithm characteristic to strive for. Why? Because simpler algorithms are easier to understand and easier to program; consequently, the resulting programs usually contain fewer bugs. There is also the undeniable aesthetic appeal of simplicity. Sometimes simpler algorithms are also more efficient than more complicated alternatives. Unfortunately, it is not always true, in which case a judicious compromise needs to be made.

Yet another desirable characteristic of an algorithm is generality. There are, in fact, two issues here: generality of the problem the algorithm solves and the set of inputs it accepts. On the first issue, note that it is sometimes easier to design an algorithm for a problem posed in more general terms. Consider, for example, the problem of determining whether two integers are relatively prime, i.e., whether their only common divisor is equal to 1. It is easier to design an algorithm for a more general problem of computing the greatest common divisor of two integers and, to solve the former problem, check whether the gcd is 1 or not. There are situations, however, where designing a more general algorithm is unnecessary or difficult or even impossible. For example, it is unnecessary to sort a list of n numbers to find its median, which is its ⌈n/2⌉th smallest element. To give another example, the standard formula for roots of a quadratic equation cannot be generalized to handle polynomials of arbitrary degrees.
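The relative-primality example is easy to make concrete: instead of designing a special-purpose test, one can reuse a gcd routine and check whether the result is 1. A minimal Python sketch of ours:

# Solving a specific problem (coprimality) via a more general one (gcd):
# two integers are relatively prime exactly when their gcd equals 1.
from math import gcd

def relatively_prime(m, n):
    return gcd(m, n) == 1

print(relatively_prime(35, 18))   # True:  gcd(35, 18) = 1
print(relatively_prime(35, 20))   # False: gcd(35, 20) = 5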

As to the set of inputs, your main concern should be designing an algorithm that can handle a set of inputs that is natural for the problem at hand. For example, excluding integers equal to 1 as possible inputs for a greatest common divisor algorithm would be quite unnatural. On the other hand, although the standard formula for the roots of a quadratic equation holds for complex coefficients, we would normally not implement it on this level of generality unless this capability is explicitly required.

If you are not satisfied with the algorithm’s efficiency, simplicity, or generality, you must return to the drawing board and redesign the algorithm. In fact, even if your evaluation is positive, it is still worth searching for other algorithmic solutions. Recall the three different algorithms in the previous section for computing the greatest common divisor: generally, you should not expect to get the best algorithm on the first try. At the very least, you should try to fine-tune the algorithm you already have. For example, we made several improvements in our implementation of the sieve of Eratosthenes compared with its initial outline in Section 1.1. (Can you identify them?) You will do well if you keep in mind the following observation of Antoine de Saint-Exupéry, the French writer, pilot, and aircraft designer: “A designer knows he has arrived at perfection not when there is no longer anything to add, but when there is no longer anything to take away.”

Coding an Algorithm

Most algorithms are destined to be ultimately implemented as computer programs. Programming an algorithm presents both a peril and an opportunity. The peril lies in the possibility of making the transition from an algorithm to a program either incorrectly or very inefficiently. Some influential computer scientists strongly believe that unless the correctness of a computer program is proven with full mathematical rigor, the program cannot be considered correct. They have developed special techniques for doing such proofs (see [Gri81]), but the power of these techniques of formal verification is limited so far to very small programs.

As a practical matter, the validity of programs is still established by testing. Testing of computer programs is an art rather than a science, but that does not mean that there is nothing in it to learn. Look up books devoted to testing and debugging; even more important, test and debug your program thoroughly whenever you implement an algorithm.

Also note that throughout the book, we assume that inputs to algorithms belong to the specified sets and hence require no verification. When implementing algorithms as programs to be used in actual applications, you should provide such verifications.

Of course, implementing an algorithm correctly is necessary but not sufficient: you would not like to diminish your algorithm’s power by an inefficient implementation. Modern compilers do provide a certain safety net in this regard, especially when they are used in their code optimization mode. Still, you need to be aware of such standard tricks as computing a loop’s invariant (an expression that does not change its value) outside the loop, collecting common subexpressions, replacing expensive operations by cheap ones, and so on. (See [Ker99] and [Ben00] for a good discussion of code tuning and other issues related to algorithm programming.) Typically, such improvements can speed up a program only by a constant factor, whereas a better algorithm can make a difference in running time by orders of magnitude. But once an algorithm is selected, a 10–50% speedup may be worth an effort.
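A hedged sketch of the kind of tuning meant here (the example and names are ours): hoisting a computation that does not change inside the loop out of it, and replacing a relatively expensive operation with a cheaper one. Such tweaks change only the constant factor, not the growth rate.

# Before tuning: the scale factor is recomputed on every iteration, and an
# exponentiation is used where a plain multiplication would do.
def scaled_squares_slow(values, factor):
    return [(x ** 2) * (factor / len(values)) for x in values]

# After tuning: the loop-invariant factor / len(values) is computed once,
# and x ** 2 is replaced by the cheaper x * x.
def scaled_squares_fast(values, factor):
    scale = factor / len(values)          # hoisted out of the loop
    return [x * x * scale for x in values]

data = [1.0, 2.0, 3.0, 4.0]
print(scaled_squares_slow(data, 2.0) == scaled_squares_fast(data, 2.0))   # True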

A working program provides an additional opportunity in allowing an empirical analysis of the underlying algorithm. Such an analysis is based on timing the program on several inputs and then analyzing the results obtained. We discuss the advantages and disadvantages of this approach to analyzing algorithms in Section 2.6.
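A minimal sketch of such an empirical analysis, using only Python's standard library: time the same function on inputs of growing size and inspect how the measurements grow. The function and input sizes below are illustrative.

# Empirical analysis: time a function on inputs of increasing size and
# observe how the running time grows with n.
import random
import time

def timed_run(func, data):
    start = time.perf_counter()
    func(data)
    return time.perf_counter() - start

def measure(func, sizes, trials=3):
    for n in sizes:
        data = [random.random() for _ in range(n)]
        best = min(timed_run(func, data) for _ in range(trials))
        print(f"n = {n:>8}: {best:.4f} s")

measure(sorted, [10_000, 100_000, 1_000_000])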

In conclusion, let us emphasize again the main lesson of the process depicted in Figure 1.2:

As a rule, a good algorithm is a result of repeated effort and rework.

Even if you have been fortunate enough to get an algorithmic idea that seems perfect, you should still try to see whether it can be improved.

Actually, this is good news since it makes the ultimate result so much more enjoyable. (Yes, I did think of naming this book The Joy of Algorithms .) On the other hand, how does one know when to stop? In the real world, more often than not a project’s schedule or the impatience of your boss will stop you. And so it should be: perfection is expensive and in fact not always called for. Designing an algorithm is an engineering-like activity that calls for compromises among competing goals under the constraints of available resources, with the designer’s time being one of the resources.

In the academic world, the question leads to an interesting but usually difficult investigation of an algorithm’s optimality. Actually, this question is not about the efficiency of an algorithm but about the complexity of the problem it solves: What is the minimum amount of effort any algorithm will need to exert to solve the problem? For some problems, the answer to this question is known. For example, any algorithm that sorts an array by comparing values of its elements needs about n log₂ n comparisons for some arrays of size n (see Section 11.2). But for many seemingly easy problems such as integer multiplication, computer scientists do not yet have a final answer.

Another important issue of algorithmic problem solving is the question of whether or not every problem can be solved by an algorithm. We are not talking here about problems that do not have a solution, such as finding real roots of a quadratic equation with a negative discriminant. For such cases, an output indicating that the problem does not have a solution is all we can and should expect from an algorithm. Nor are we talking about ambiguously stated problems. Even some unambiguous problems that must have a simple yes or no answer are “undecidable,” i.e., unsolvable by any algorithm. An important example of such a problem appears in Section 11.3. Fortunately, a vast majority of problems in practical computing can be solved by an algorithm.

Before leaving this section, let us be sure that you do not have the misconception, possibly caused by the somewhat mechanical nature of the diagram of Figure 1.2, that designing an algorithm is a dull activity. Nothing could be further from the truth: inventing (or discovering?) algorithms is a very creative and rewarding process. This book is designed to convince you that this is the case.

Exercises 1.2

1. Old World puzzle A peasant finds himself on a riverbank with a wolf, a goat, and a head of cabbage. He needs to transport all three to the other side of the river in his boat. However, the boat has room for only the peasant himself and one other item (either the wolf, the goat, or the cabbage). In his absence, the wolf would eat the goat, and the goat would eat the cabbage. Solve this problem for the peasant or prove it has no solution. (Note: The peasant is a vegetarian but does not like cabbage and hence can eat neither the goat nor the cabbage to help him solve the problem. And it goes without saying that the wolf is a protected species.)

2. New World puzzle There are four people who want to cross a rickety bridge; they all begin on the same side. You have 17 minutes to get them all across to the other side. It is night, and they have one flashlight. A maximum of two people can cross the bridge at one time. Any party that crosses, either one or two people, must have the flashlight with them. The flashlight must be walked back and forth; it cannot be thrown, for example. Person 1 takes 1 minute to cross the bridge, person 2 takes 2 minutes, person 3 takes 5 minutes, and person 4 takes 10 minutes. A pair must walk together at the rate of the slower person’s pace. (Note: According to a rumor on the Internet, interviewers at a well-known software company located near Seattle have given this problem to interviewees.)

3. Which of the following formulas can be considered an algorithm for computing the area of a triangle whose side lengths are given positive numbers a, b, and c?


4. Write pseudocode for an algorithm for finding real roots of the equation ax² + bx + c = 0 for arbitrary real coefficients a, b, and c. (You may assume the availability of the square root function sqrt(x).)

5. Describe the standard algorithm for finding the binary representation of a positive decimal integer
   a. in English.
   b. in pseudocode.

6. Describe the algorithm used by your favorite ATM machine in dispensing cash. (You may give your description in either English or pseudocode, whichever you find more convenient.)

7. a. Can the problem of computing the number π be solved exactly?
   b. How many instances does this problem have?
   c. Look up an algorithm for this problem on the Internet.

8. Give an example of a problem other than computing the greatest common divisor for which you know more than one algorithm. Which of them is simpler? Which is more efficient?

9. Consider the following algorithm for finding the distance between the two closest elements in an array of numbers.

ALGORITHM MinDistance(A[0..n − 1])
//Input: Array A[0..n − 1] of numbers
//Output: Minimum distance between two of its elements
dmin ← ∞
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        if i ≠ j and |A[i] − A[j]| < dmin
            dmin ← |A[i] − A[j]|
return dmin

Make as many improvements as you can in this algorithmic solution to the problem. If you need to, you may change the algorithm altogether; if not, improve the implementation given.

10. One of the most influential books on problem solving, titled How to Solve It [Pol57], was written by the Hungarian-American mathematician George Pólya (1887–1985). Pólya summarized his ideas in a four-point summary. Find this summary on the Internet or, better yet, in his book, and compare it with the plan outlined in Section 1.2. What do they have in common? How are they different?


Design and Analysis of Algorithms Tutorial


An algorithm is a sequence of steps for solving a problem; it acts as a set of instructions describing how a computation should be carried out, and there is no single fixed structure that every algorithm must follow. Design and Analysis of Algorithms covers how to design algorithms that solve various problems in computer science and information technology, and how to analyse the complexity of the algorithms designed.

The main aim of designing an algorithm is to provide an optimal solution to a problem. Not all problems call for the same kind of solution; an approach that is optimal for one problem may not be optimal for another. Therefore, we must adopt various strategies to provide feasible solutions for all types of problems.

This tutorial introduces the fundamental concepts of design strategies and complexity analysis of algorithms, followed by problems on graph theory and sorting methods. It also includes the basic concepts of complexity theory.

In this tutorial, we will provide online compilers and editors to execute programs for all the algorithms. The code is written in four different programming languages: C, C++, Java, and Python. This eliminates the need to install a local setup for all these languages.

For instance, consider the code for a simple linear search algorithm, which can be executed with such online compilers; a sketch follows.
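A sketch of such a linear search in Python (one of the four languages mentioned above): scan the list from left to right and return the index of the first element equal to the key, or -1 if the key is absent.

# Linear search: examine the elements one by one, left to right.
def linear_search(items, key):
    for i, value in enumerate(items):
        if value == key:
            return i
    return -1

data = [10, 40, 30, 50, 20]
print(linear_search(data, 50))   # 3
print(linear_search(data, 60))   # -1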

A single computational problem may have several versions of a solution. Each approach taken to solve the problem may be correct; however, choosing the best-suited one improves the efficiency of the resulting program.

There might be a misconception that the shortest algorithm is the best-suited solution in most cases. In fact, a feasible solution is judged not by the length of the algorithm but by the efficiency of its time and space complexity.

We study Design and Analysis of Algorithms to analyse the complexity of all the versions of a solution to a computational problem.

There are various strategies for designing algorithms for different problems. Some of them are listed as follows:

  • Greedy Approach
  • Divide & Conquer Approach
  • Dynamic Programming Approach
  • Randomization Approach
  • Approximation Approach
  • Recursive Approach
  • Branch and Bound Approach

In this tutorial, we will see various algorithms of each approach to solve various problems.

To analyse the feasibility of an algorithm we have designed, we calculate its complexity. Complexity is expressed using asymptotic notations, of which the commonly used ones are as follows:

  • Big Oh (O) and Little Oh (o) notations, which give upper bounds on growth
  • Big Omega (Ω) and Little Omega (ω) notations, which give lower bounds on growth
  • Theta (Θ) notation, which gives a tight bound (matching upper and lower bounds)

These notations are separate from the input scenario being considered: an algorithm's running time can be analysed for the worst case, the best case, and the average case. Each candidate solution is analysed in these scenarios, and the solution with the best worst-case behaviour is usually preferred as the efficient one.

There are applications of Design and Analysis of Algorithms (DAA) in a wide range of fields. Here are some of the common fields where it is used:

  • Computer programming: Used in computer programming to solve problems efficiently. This includes developing algorithms for sorting, searching, and manipulating data structures.
  • Big data processing: Also used to develop and analyse algorithms for operations such as data mining, machine learning, and natural language processing, in order to handle large sets of data.
  • Networking: Network protocols and algorithms are developed for routing, flow control, congestion control, and network security. DAA is used to design and analyse these algorithms.
  • Artificial intelligence: Used to design and analyse algorithms for tasks in Artificial Intelligence such as computer vision, speech recognition, and natural language understanding.
  • Scientific computing: DAA in scientific computing is used to develop algorithms for operations like numerical integration, optimization, and simulation.
  • Game development: In game development, we use design and analysis of algorithms for pathfinding, collision detection, and physics simulation.
  • Cryptography: DAA is also used in the design and analysis of cryptographic algorithms, such as RSA and AES, which are used to secure data transmission and storage.

This tutorial has been designed for students pursuing a degree in any computer science, engineering, and/or information technology related fields. It attempts to help students to grasp the essential concepts involved in algorithm design.

The readers should have basic knowledge of programming and mathematics. The readers should know data structures well. Moreover, it is preferred if the readers have a basic understanding of Formal Language and Automata Theory.

Many top companies are actively recruiting experts in Design and Analysis of Algorithms, and they offer roles such as Software Engineer, Data Scientist, Machine Learning Engineer, and more. These companies need individuals who can solve complex problems, analyse data, and design algorithms to drive their business forward. Here is a list of a few such companies:

  • JPMorgan Chase
  • Goldman Sachs
  • Johnson & Johnson

The demand for DAA professionals is continually growing across various sectors. By developing expertise in these areas, you can open up a wide range of career opportunities in some of the world's leading companies.

To get started, there are user-friendly tutorials and resources available to help you master DAA. These materials are designed to prepare you for technical interviews and certification exams, and you can learn at your own pace, anytime and anywhere.

There are many Frequently Asked Questions (FAQs) on Design and Analysis of Algorithms due to the complex nature of this concept. In this section, we will try to answer some of them briefly.

An algorithm is a set of instructions to solve a problem by performing calculations, data processing, or automating reasoning tasks. However, there are always multiple solutions to solving a problem. Design and Analysis of Algorithms provides various ways to design efficient algorithms to solve a problem by analysing their complexities.

Algorithm analysis is an important part of computational complexity theory. The complexity theory provides a theoretical estimation for the required algorithm resources to solve a computational problem. For instance, most algorithms are designed to work with input data of variable length. Analysis of algorithms determines the amount of time and space taken to execute such algorithms.

Here is a summarized list of tips that you can follow to learn Analysis of Algorithms.

  • Follow our tutorial step by step from the very beginning.
  • Read more articles, watch online courses or buy reference books on Algorithm Analysis to enhance your knowledge.
  • Try to design a small algorithm for a simple problem to check your knowledge in these concepts.

As algorithms are not language specific, using any programming language that you are comfortable with is recommended.

Our basic aim while designing an algorithm is to maintain the efficiency of the solution. Algorithms are used in almost all areas of computing. Hence, learning how to design an efficient algorithm is important.

To test the implementation of an algorithm, feed it diverse types of inputs and observe the outputs generated. If the time taken and the space consumed are acceptable even for worst-case inputs, the algorithm is feasible.

Time complexity of an algorithm, in general, describes how the time taken to execute the algorithm grows with the size of its input. Time complexity can be influenced by various factors such as the input size and the methods and procedures used. An algorithm is said to be most efficient when it produces its output in the minimum time possible.

Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm. So, it is usually computed by combining the auxiliary space and the space used by input values.


1.1: Activity 1 - Overview of Algorithm Design and Analysis


  • Godfry Justo
  • University of Dar es Salaam via African Virtual University


Introduction

There are compelling reasons to study algorithms. To put it simply, computer programs would not exist without algorithms. And with computer applications becoming indispensable in almost all aspects of our professional and personal lives, studying algorithms becomes a necessity for more and more people.

Another reason for studying algorithms is their usefulness in developing analytical skills. After all, algorithms can be seen as special kinds of solutions to problems, not answers but precisely defined procedures for getting answers. Consequently, specific algorithm design techniques can be interpreted as problem solving strategies that can be useful regardless of whether a computer is involved.

Activity Details

The term algorithm can be defined using any of the following statements:

  • An algorithm is a set of rules for carrying out calculations either by hand or on a machine.
  • An algorithm is a finite step-by-step procedure to achieve a required result.
  • An algorithm is a sequence of computational steps that transform the input into the output.
  • An algorithm is a sequence of operations performed on data that have to be organized in data structures.
  • An algorithm is an abstraction of a program to be executed on a physical machine (model of computation).

The most famous algorithm in history dates back to the time of the ancient Greeks: Euclid’s algorithm for calculating the greatest common divisor of two integers.

The Classic Multiplication Algorithm

Multiplication, the American way:

Multiply the multiplicand one after another by each digit of the multiplier taken from right to left.

[Figure: worked example of multiplication, the American way]

Multiplication, the English way:

Multiply the multiplicand one after another by each digit of the multiplier taken from left to right.

[Figure: worked example of multiplication, the English way]
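A minimal Python sketch of the underlying procedure (our own illustration, not taken from the figures): take the multiplier digit by digit from right to left, form a partial product of the multiplicand for each digit, shift it by that digit's place value, and add the partial products. The English way differs only in taking the digits from left to right; the partial products, and hence the result, are the same.

# Classic digit-by-digit multiplication: one shifted partial product per
# multiplier digit, taken right to left (the "American" order), then summed.
def long_multiply(multiplicand, multiplier):
    total = 0
    shift = 0
    while multiplier > 0:
        digit = multiplier % 10                        # next digit, right to left
        total += multiplicand * digit * (10 ** shift)  # shifted partial product
        multiplier //= 10
        shift += 1
    return total

print(long_multiply(981, 1234))   # 1210554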

Algorithmics is a branch of computer science that consists of designing and analyzing computer algorithms.

The “design” part concerns:

  • the description of the algorithm at an abstract level by means of a pseudo-language, and
  • a proof of correctness, that is, a demonstration that the algorithm solves the given problem in all cases.

The “analysis” deals with performance evaluation (complexity analysis).

We start by defining the model of computation, which is usually the Random Access Machine (RAM) model, although other models of computation, such as the PRAM, can be used. Once the model of computation has been defined, an algorithm can be described using a simple language (or pseudo-language) whose syntax is close to that of a programming language such as C or Java.

Algorithm’s Performance

Two important ways to characterize the effectiveness of an algorithm are its space complexity and time complexity. Time complexity of an algorithm concerns determining an expression for the number of steps needed as a function of the problem size. Since the step count measure is somewhat rough, one does not aim at obtaining an exact step count. Instead, one attempts only to get asymptotic bounds on the step count.

Asymptotic analysis makes use of the O (Big Oh) notation. Two other notational constructs used by computer scientists in the analysis of algorithms are Θ (Big Theta) notation and Ω (Big Omega) notation. The performance evaluation of an algorithm is obtained by tallying the number of occurrences of each operation when running the algorithm. The performance of an algorithm is evaluated as a function of the input size n and is to be considered modulo a multiplicative constant.

The following notations are commonly used in performance analysis to characterize the complexity of an algorithm.

Θ-Notation (Same order)

This notation bounds a function to within constant factors. We say f(n) = Θ(g(n)) if there exist positive constants n0, c1 and c2 such that to the right of n0 the value of f(n) always lies between c1 g(n) and c2 g(n) inclusive.

In the set notation, we write as follows:

Θ ( g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0}

We say that g(n) is an asymptotically tight bound for f(n)

[Figure 1.1.1: f(n) lies between c1 g(n) and c2 g(n) for all n ≥ n0]

As shown in Figure 1.1.1 , graphically, for all values of n to the right of n0, the value of f(n) lies at or above c1 g(n) and at or below c2 g(n). In other words, for all n ≥ n0, the function f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

In the set terminology, f(n) is said to be a member of the set Θ(g(n)) of functions. In other words, because Θ(g(n)) is a set, we could write

f(n) ∈ Θ(g(n))

to indicate that f(n) is a member of Θ(g(n)). Instead, we write

f(n) = Θ(g(n))

to express the same notation.

Historically, the notation “f(n) = Θ(g(n))” has been used, although the idea that f(n) is equal to something called Θ(g(n)) can be misleading.

Example: n²/2 − 2n = Θ(n²), with c1 = 1/4, c2 = 1/2, and n0 = 8.
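A small numeric spot-check of this example (a sketch of ours, not a proof): for every n from n0 = 8 up to some limit, verify that c1·n² ≤ n²/2 − 2n ≤ c2·n² with c1 = 1/4 and c2 = 1/2.

# Spot-check the Theta example: for all n >= 8,
#   (1/4) * n**2  <=  n**2/2 - 2*n  <=  (1/2) * n**2.
c1, c2, n0 = 0.25, 0.5, 8
f = lambda n: n * n / 2 - 2 * n
g = lambda n: n * n

print(all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000)))   # True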

Ο -Notation (Upper Bound)

This notation gives an upper bound for a function to within a constant factor. We write f(n) = O(g(n)) if there are positive constants n0 and c such that to the right of n0, the value of f(n) always lies on or below c g(n).

In the set notation, we write as follows: For a given function g(n), the set of functions

Ο(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0}

We say that the function g(n) is an asymptotic upper bound for the function f(n). We use Ο-notation to give an upper bound on a function, to within a constant factor.

[Figure 1.1.2: f(n) lies on or below c g(n) for all n ≥ n0]

As shown in Figure 1.1.2, graphically, for all values of n to the right of n0, the value of the function f(n) is on or below c g(n). We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set Ο(g(n)), i.e.

f(n) ∈ Ο(g(n))

Note that f(n) = Θ(g(n)) implies f(n) = Ο(g(n)), since Θ-notation is a stronger notation than Ο-notation.

Example: 2n² = Ο(n³), with c = 1 and n0 = 2.

Equivalently, we may also define f is of order g as follows:

If f(n) and g(n) are functions defined on the positive integers, then f(n) is Ο(g(n)) if and only if there is a c > 0 and an n0 > 0 such that

| f(n) | ≤ c | g(n) | for all n ≥ n0

Ω-Notation (Lower Bound)

This notation gives a lower bound for a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n0 and c such that to the right of n0, the value of f(n) always lies on or above c g(n).

Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c g(n) ≤ f(n) for all n ≥ n0}

We say that the function g(n) is an asymptotic lower bound for the function f(n).

[Figure 1.1.3: f(n) lies on or above c g(n) for all n ≥ n0]

Figure 1.1.3 provides an insight behind Ω-notation. For example, √n = Ω(lg n), with c = 1 and n0 = 16.

Algorithm’s Analysis

The complexity of an algorithm is a function g(n) that gives the upper bound on the number of operations (or running time) performed by an algorithm when the input size is n.

There are two interpretations of upper bound.

  • Worst-case Complexity: the running time for any given size input will be lower than the upper bound except possibly for some values of the input where the maximum is reached.
  • Average-case Complexity: the running time for any given size input will be the average number of operations over all problem instances for a given size.

Because it is quite difficult to estimate the statistical behaviour of the input, most of the time we content ourselves with the worst-case behaviour. Most of the time, the complexity g(n) is approximated by its family O(f(n)), where f(n) is one of the following functions: n (linear complexity), log n (logarithmic complexity), n^a where a ≥ 2 (polynomial complexity), or a^n (exponential complexity).

Once the complexity of an algorithm has been estimated, the question arises whether this algorithm is optimal. An algorithm for a given problem is optimal if its complexity reaches the lower bound over all the algorithms solving this problem. For example, any algorithm solving “the intersection of n segments” problem will execute at least n² operations in the worst case even if it does nothing but print the output, since n segments can intersect in up to n(n − 1)/2 points and merely reporting them can require that many operations. This is abbreviated by saying that the problem has Ω(n²) complexity. If one finds an O(n²) algorithm that solves this problem, it will be optimal and of complexity Θ(n²).

Another technique for estimating the complexity of a problem is the transformation of problems, also called problem reduction. As an example, suppose we know a lower bound for a problem A and would like to establish a lower bound for a problem B. If we can transform A into B by a transformation step whose cost is less than that of solving A, then B has the same lower bound as A, since B must be at least as hard as A.

The convex hull problem nicely illustrates the reduction technique. A lower bound for the convex hull problem is established by reducing the sorting problem (which has an Ω(n log n) comparison lower bound) to the convex hull problem.
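A sketch of that reduction (ours, using Andrew's monotone-chain lower hull purely for illustration): map each number x to the point (x, x²) on a parabola; every such point is a vertex of the convex hull, and the lower hull read from left to right lists the numbers in sorted order. Hence a convex hull algorithm faster than n log n would yield a comparison sort faster than the Ω(n log n) bound, which is impossible. Note that the hull routine below itself sorts by x internally, so the code only illustrates the mapping behind the argument, not a faster way to sort.

# Reduce sorting to convex hull: lift each number x to (x, x*x); all lifted
# points are hull vertices, and the lower hull lists them in order of x.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):                  # Andrew's monotone chain, lower part
    points = sorted(set(points))
    hull = []
    for p in points:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

def sort_via_hull(numbers):
    lifted = [(x, x * x) for x in numbers]
    return [x for x, _ in lower_hull(lifted)]

print(sort_via_hull([3, -1, 4, 1, 5, -9, 2]))   # [-9, -1, 1, 2, 3, 4, 5]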

This activity introduced the basic concepts of algorithm design and analysis. The need for analysing the time and space complexity of algorithms was articulated and the notations for describing algorithm complexity were presented.

  • Distinguish between space and time complexity.
  • Briefly explain the commonly used notations in performance analysis.


Problem solving in Artificial Intelligence - javatpoint/Artificial-Intelligence GitHub Wiki

What is a problem-solving agent in artificial intelligence?

A reflex agent in AI maps states directly to actions. When such agents cannot cope with an environment that is too complex for them to map, the problem is handed over to a problem-solving agent, which breaks the problem down into smaller parts and solves each one. The desired result is achieved by the final, integrated sequence of actions.

Different types of problem-solving agents can be defined based on the problem and the working domain. They can then be used at an atomic level, without any internal state that is visible to the problem-solving algorithm.

Problem-solving agents work by defining problems precisely and offering one or more solutions. Problem solving is an aspect of artificial intelligence; it uses a variety of techniques, such as trees, B-trees, and heuristic algorithms, to solve a problem.

Another way to put it is that a problem-solving agent is results-driven and always focuses on achieving its goals.

How to solve problems in AI?

AI is directly related to the nature of human activity: humans solve a problem by taking a finite number of steps, and this is what makes the process manageable.

These are the steps required to solve a problem:

Goal formulation: This is the first step in problem solving in artificial intelligence. It defines the goal or target that the agent's actions are meant to reach. AI agents are used today to formulate such goals.

Problem formulation: This is one of the key steps in problem solving. It decides which actions and states should be considered in order to reach the goal. This core part depends on the software agent, which uses the following components to solve the problem.

The components used to solve the problem

Initial state: The state in which the agent starts and from which it begins working toward the goal; it is the starting point of the search through the problem domain.

Actions: A description of the possible actions available to the agent in a given state; all applicable actions are considered at this stage.

Transition model: A description of what each action does, that is, the state that results from performing an action in a given state; this result is passed on to the next stage.

Goal test: This stage determines whether the specified goal has been reached using the transition model. If the goal is achieved, the agent stops acting and moves on to determining the cost of achieving the goal.

Path cost: This part assigns a numeric cost to each path toward the goal, so that less costly solutions can be preferred.
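These components can be captured directly in code. A minimal sketch of ours (the class and method names are illustrative, not a standard API) formulates a toy problem, reaching a target number from a starting number, in terms of initial state, actions, transition model, goal test, and step cost.

# A toy problem formulation with the components described above.
class CountingProblem:
    def __init__(self, initial, goal):
        self.initial_state = initial      # initial state
        self.goal = goal

    def actions(self, state):             # actions available in a state
        return ["+1", "*2"]

    def result(self, state, action):      # transition model
        return state + 1 if action == "+1" else state * 2

    def goal_test(self, state):           # has the goal been reached?
        return state == self.goal

    def step_cost(self, state, action):   # path cost sums these step costs
        return 1

problem = CountingProblem(initial=1, goal=10)
state = problem.initial_state
for action in ["*2", "*2", "+1", "*2"]:   # one hand-picked action sequence
    state = problem.result(state, action)
print(problem.goal_test(state))           # True: 1 -> 2 -> 4 -> 5 -> 10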


  14. Algorithmic problem solving

    Techniques are, • Understanding the problem • Ascertain the capabilities of the computational device • Exact/approximate solution • Decide on the appropriate data structure • Algorithm design technique • Methods of specifying an algorithm • Proving an algorithm correctness • Analyzing an algorithm • Coding an algorithm Problem ...

  15. Quiz about Top 50 Algorithms MCQs with Answers

    Top 50 Algorithms MCQs with Answers Quiz will help you to test and validate your DSA Quiz knowledge. It covers a variety of questions, from basic to advanced. The quiz contains 50 questions. You just have to assess all the given options and click on the correct answer.

  16. Algorithmic Problem Solving

    In this course, hosted by Justin and Vonne, we delve into the fundamentals of problem-solving in programming. Starting with algorithmic problem-solving approaches, we gradually progress to tackling specific coding challenges. Episodes cover various problem-solving scenarios, such as finding the maximum number in a list, reversing a string ...

  17. Problem Solving Techniques in AI

    Artificial intelligence (AI) problem-solving often involves investigating potential solutions to problems through reasoning techniques, making use of polynomial and differential equations, and carrying them out and use modelling frameworks. A same issue has a number of solutions, that are all accomplished using an unique algorithm.

  18. Fundamentals of Algorithmic Problem Solving

    The next principal decision is to choose between solving the problem exactly and solving it approximately. An algorithm used to solve the problem exactly and produce correct result is called an exact algorithm. If the problem is so complex and not able to get exact solution, then we have to choose an algorithm called an approximation algorithm ...

  19. PDF UNIT-I Notion of an Algorithm

    An algorithm is a sequence of unambiguous instructions for solving a problem in a finite amount of time. 11. Write about Notion of Algorithm. Problem Algorithm Input Computer Output 12. Write a short note on Algorithm Design and Analysis of Process. Understand the problem Decide on Computational Device

  20. Zero Matrix Problem in Java

    The Zero Matrix Problem involves taking a given matrix and, for every element that is zero, setting its entire row and column to zero. The objective is to modify the matrix in-place, meaning without using additional memory. In this example, the element at position (1,1) is zero, so we set the entire first row and first column to zero. The zero ...

  21. Problem solving in Artificial Intelligence

    Problem-solving agents perform precisely by defining problems, and offering multiple solutions. Problem solving can be described as an aspect of artificial intelligence. It uses a variety of techniques, such as B-tree, tree, and heuristic algorithms, to solve a problem. Another way to put it is that a problem-solving agency is one that is ...

  22. Artificial Intelligence MCQ (Multiple Choice Questions)

    Artificial Intelligence Multiple Choice Questions. 1) Artificial Intelligence is about_____. Playing a game on Computer. Making a machine Intelligent. Programming on Machine with your Own Intelligence. Putting your intelligence in Machine. Show Answer. Workspace. 2) Who is known as the -Father of AI"?

  23. Search Algorithms in AI

    Problem-solving agents are the goal-based agents and use atomic representation. In this topic, we will learn various problem-solving search algorithms. Search Algorithm Terminologies: Search: Searchingis a step by step procedure to solve a search-problem in a given search space. A search problem can have three main factors: Search Space: Search ...