Journey Through Python Algorithms: From Bubble Sort to Dynamic Programming for More Efficient Code
Release time: 2024-11-10 22:07:01
Copyright Statement: This article is an original work of the website and follows the CC 4.0 BY-SA copyright agreement. Please include the original source link and this statement when reprinting.

Article link: https://junyayun.com/en/content/aid/1356?s=en%2Fcontent%2Faid%2F1356

Hello, Python enthusiasts! Today, we’re going to talk about algorithms in Python. Algorithms might sound a bit daunting, but don’t worry, I’ll explain them in the simplest way possible to help you appreciate their beauty. Ready? Let’s embark on this exciting journey through Python algorithms!

What’s an Algorithm?

First, you might ask, “What exactly is an algorithm?” Simply put, an algorithm is a well-defined series of steps for solving a problem. Just like a recipe for cooking, an algorithm is a “recipe” for programming: it tells us how to process data, step by step, to get the desired result.

Why should we learn algorithms? Imagine you need to find a specific book among a large pile. Would you search from front to back? You could, but if there are many books, that would be exhausting. If the books are sorted alphabetically, you could use binary search to quickly find your target. See, that’s the magic of algorithms! They allow us to solve problems in smarter and more efficient ways.

Bubble Sort

When it comes to algorithms, sorting algorithms are a must-mention. Among them, the simplest and easiest to understand is bubble sort.

The principle of bubble sort is simple: just like bubbles rise in water, we repeatedly traverse the list to be sorted, comparing adjacent numbers and swapping them if they’re out of order. Sounds simple, right? Let’s see how to implement it in Python:

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        # After each pass, the largest remaining element has "bubbled" to the
        # end, so the last i elements are already in place and can be skipped.
        for j in range(0, n-i-1):
            if arr[j] > arr[j+1]:
                # Adjacent pair is out of order: swap it
                arr[j], arr[j+1] = arr[j+1], arr[j]
    return arr


my_list = [64, 34, 25, 12, 22, 11, 90]
print(bubble_sort(my_list))

See, that’s the Python implementation of bubble sort. Isn’t it intuitive? In each loop, we "bubble" the largest number to the end. Repeat this process, and you eventually get a sorted list.

However, did you notice a problem? If the list is very long, won’t this method be slow? Indeed, the time complexity of bubble sort is O(n^2), which isn’t very efficient for large datasets. So, is there a faster sorting method?
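Before moving on, it's worth noting a common small improvement (a sketch of a well-known variant, not part of the implementation above): if a full pass makes no swaps, the list is already sorted and we can stop early. The worst case is still O(n^2), but an already sorted list now takes a single pass.

```python
def bubble_sort_early_exit(arr):
    """Bubble sort that stops as soon as a pass performs no swaps."""
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps in this pass: the list is sorted
            break
    return arr


print(bubble_sort_early_exit([64, 34, 25, 12, 22, 11, 90]))
```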

Quick Sort

Speaking of faster sorting methods, quick sort is a must-mention. Quick sort is a divide-and-conquer strategy, and its basic idea is:

  1. Choose a pivot
  2. Partition the list into two parts: elements smaller than the pivot on the left and elements larger on the right
  3. Recursively quick sort the left and right parts

Sounds complex? Don’t worry, look at the Python implementation below, and you’ll understand:

def quick_sort(arr):
    if len(arr) <= 1:
        # A list with 0 or 1 elements is already sorted
        return arr
    else:
        pivot = arr[0]  # take the first element as the pivot
        less = [x for x in arr[1:] if x <= pivot]    # elements not greater than the pivot
        greater = [x for x in arr[1:] if x > pivot]  # elements greater than the pivot
        return quick_sort(less) + [pivot] + quick_sort(greater)


my_list = [64, 34, 25, 12, 22, 11, 90]
print(quick_sort(my_list))

See, that’s the Python implementation of quick sort. Isn’t it more concise than bubble sort? Moreover, the average time complexity of quick sort is O(n log n), much faster than bubble sort.

You might ask, “How is the pivot chosen?” Good question! In this implementation, we simply chose the first element as the pivot. In practice, though, the pivot choice affects performance: on an already sorted list, always picking the first element degrades quick sort to its O(n^2) worst case. Common strategies include choosing a random element or the middle element.
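The random-pivot strategy can be sketched as follows (a variant of the function above, not the original article's code; the extra `equal` bucket keeps duplicates of the pivot from causing unnecessary recursion):

```python
import random


def quick_sort_random(arr):
    """Quick sort with a randomly chosen pivot, which makes the
    O(n^2) worst case unlikely regardless of the input order."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]      # strictly smaller than the pivot
    equal = [x for x in arr if x == pivot]    # the pivot and its duplicates
    greater = [x for x in arr if x > pivot]   # strictly larger than the pivot
    return quick_sort_random(less) + equal + quick_sort_random(greater)


print(quick_sort_random([64, 34, 25, 12, 22, 11, 90]))
```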

Binary Search

After sorting, let’s talk about searching. Remember the book-finding example we mentioned earlier? If the books are sorted, we can use binary search.

The idea of binary search is simple: each time, check the middle element. If it’s what we’re looking for, great; if not, determine whether the target is in the left or right half, then continue searching in that half. This way, we eliminate half the possibilities each time, greatly improving search efficiency.

Let’s see how binary search is implemented in Python:

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # Return -1 if not found


my_list = [11, 12, 22, 25, 34, 64, 90]
print(binary_search(my_list, 25))
print(binary_search(my_list, 91))

See, that’s the Python implementation of binary search. Its time complexity is O(log n), which is very efficient for large datasets.

However, note that binary search only works on sorted lists. If the list is unsorted, we need to sort it first before searching. That’s why the choice of sorting algorithm becomes important.
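Incidentally, in day-to-day Python you rarely need to hand-write binary search: the standard library's bisect module already provides it. A small sketch (the wrapper function name is mine, but bisect_left is the real standard-library call):

```python
import bisect


def binary_search_bisect(arr, target):
    """Binary search on a sorted list using the standard-library bisect module."""
    i = bisect.bisect_left(arr, target)  # leftmost position where target could be inserted
    if i < len(arr) and arr[i] == target:
        return i
    return -1  # target is not present


my_list = [11, 12, 22, 25, 34, 64, 90]
print(binary_search_bisect(my_list, 25))  # 3
print(binary_search_bisect(my_list, 91))  # -1
```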

Dynamic Programming

By now, you might be thinking, “These algorithms are useful, but they seem to solve only specific types of problems. Is there a more general method to solve more complex problems?”

Great question! This brings us to dynamic programming. Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems.

Sounds abstract? Don’t worry, let’s look at a classic example: the Fibonacci sequence.

The Fibonacci sequence is defined as follows: the first number is 0, the second is 1, and from the third number onward, each number is the sum of the previous two. That is, the sequence is: 0, 1, 1, 2, 3, 5, 8, 13, 21, ...

If we implement this recursively, it might look like this:

def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)


print(fib(10))

This implementation looks intuitive, right? However, as n gets larger, this function becomes very slow. Why? Its running time grows roughly exponentially, because it recalculates the same subproblems again and again: fib(2), for example, is recomputed many times while evaluating fib(10).
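One quick fix (a sketch using the standard library, not from the article above) is to cache each result the first time it is computed, using functools.lru_cache. The recursive code stays almost unchanged, but every subproblem is now solved only once:

```python
from functools import lru_cache


@lru_cache(maxsize=None)  # remember every fib_memo(n) result already computed
def fib_memo(n):
    """Recursive Fibonacci with memoization: each n is computed only once."""
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)


print(fib_memo(100))
```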

This is where dynamic programming comes in. We can use an array to store already calculated results, avoiding repeated calculations:

def fib_dp(n):
    if n <= 1:
        return n
    dp = [0] * (n+1)  # dp[i] will hold the i-th Fibonacci number
    dp[1] = 1
    for i in range(2, n+1):
        dp[i] = dp[i-1] + dp[i-2]  # reuse the two results already stored
    return dp[n]


print(fib_dp(100))

See, that’s the charm of dynamic programming! By storing intermediate results, it avoids repeated calculations and greatly improves efficiency. Using this method, we can easily calculate the 100th Fibonacci number!
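As a final polish (a common refinement, sketched here rather than taken from the article), notice that the loop above only ever reads the previous two entries of dp. We can therefore drop the array entirely and keep just two variables, reducing the extra memory from O(n) to O(1):

```python
def fib_two_vars(n):
    """Iterative Fibonacci keeping only the last two values: O(n) time, O(1) space."""
    a, b = 0, 1  # a = fib(0), b = fib(1)
    for _ in range(n):
        a, b = b, a + b  # slide the window forward one position
    return a


print(fib_two_vars(100))
```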

Conclusion

Well, our journey through Python algorithms comes to a temporary halt. We started with simple bubble sort, moved through quick sort and binary search, and finally reached the realm of dynamic programming. Have you felt the allure of algorithms?

Remember, algorithms are not just abstract concepts; they are powerful tools for solving real problems. In your programming career, you’ll encounter various problems, and algorithms are like your Swiss Army knife, helping you solve them more efficiently.

So, are you ready to dive deeper into the world of algorithms? Or are you interested in a specific algorithm? Feel free to tell me in the comments, and we can discuss, learn, and grow together!

On the journey of programming, let’s walk hand in hand, using the power of algorithms to make our code more efficient and elegant!
