This article shows the results of repeating the empirical analysis from the course Algorithms, Part I, lesson "Analysis of Algorithms".

The lesson illustrates how to use timing to measure the performance of a program.

It gives an example of the 3-sum program written in Java and measures the program's running time on different inputs.

The idea of 3-sum is straightforward: given a list of integers, count the triples whose sum equals 0.

For example, [1, 2, -3, -1, -1] contains two triples that sum to 0:

• 1 + 2 + (-3) = 0
• 2 + (-1) + (-1) = 0
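The example can be checked quickly with a one-off helper (the function name `count_zero_triples` is mine, not from the lesson):

```python
from itertools import combinations

def count_zero_triples(nums):
    """Count position-distinct triples whose sum is zero."""
    return sum(1 for a, b, c in combinations(nums, 3) if a + b + c == 0)

print(count_zero_triples([1, 2, -3, -1, -1]))  # → 2
```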

A simple brute-force algorithm could be:

for i in 0..n-1:
    for j in i+1..n-1:
        for k in j+1..n-1:
            if a[i] + a[j] + a[k] == 0:
                count += 1


Obviously, it is an $O(n^3)$ algorithm. However, we still want to verify this with a program.

I rewrote the program in Python and timed its performance using a method similar to the one in the lesson.

The code snippet that implements the algorithm:

def three_sum(ran_list):
    count = 0
    ran_len = len(ran_list)
    for i in range(ran_len):
        for j in range(i + 1, ran_len):
            for k in range(j + 1, ran_len):
                if ran_list[i] + ran_list[j] + ran_list[k] == 0:
                    count += 1
    print(count)
    return count
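The timing code itself isn't shown in the article; a minimal harness in the spirit of the lesson's Stopwatch might look like this (the random-integer input and its range are my assumptions, and `three_sum` is repeated here without the `print` so the block runs on its own):

```python
import random
import time

def three_sum(ran_list):
    # Same brute-force triple loop as above, minus the print.
    count = 0
    ran_len = len(ran_list)
    for i in range(ran_len):
        for j in range(i + 1, ran_len):
            for k in range(j + 1, ran_len):
                if ran_list[i] + ran_list[j] + ran_list[k] == 0:
                    count += 1
    return count

def time_three_sum(n, lo=-1_000_000, hi=1_000_000, seed=0):
    """Run three_sum on n random ints and return elapsed seconds."""
    random.seed(seed)
    data = [random.randint(lo, hi) for _ in range(n)]
    start = time.perf_counter()
    three_sum(data)
    return time.perf_counter() - start

for n in (50, 100, 200):
    print(n, round(time_three_sum(n), 3))
```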


I tested it on inputs of different sizes (number of integers):

Num of Int    Time (s), ±0.001 s
50            0.003
100           0.024
200           0.180
400           1.577
800           11.975
1600          94.975
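Since each input size doubles the previous one, the doubling hypothesis from the lesson applies: if $T(N) \sim a N^b$, then $T(2N)/T(N) \sim 2^b$, so $b \approx \log_2\big(T(2N)/T(N)\big)$. A short calculation over the measurements above:

```python
import math

# (N, seconds) pairs from the table above
data = [(50, 0.003), (100, 0.024), (200, 0.180),
        (400, 1.577), (800, 11.975), (1600, 94.975)]

# Estimate the exponent b from each consecutive doubling.
for (n1, t1), (n2, t2) in zip(data, data[1:]):
    b = math.log2(t2 / t1)
    print(f"N={n1}->{n2}: ratio={t2 / t1:.2f}, b≈{b:.2f}")
```

Every ratio is close to 8, i.e. b ≈ 3, consistent with cubic growth.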

The scatter plot of input size versus running time:

The corresponding log-log plot:

We can see from the log-log plot that the points fall on a straight line of slope 3, i.e. the running time grows as $N^3$.

Some concerns:

In the illustrative Java program from the lecture, processing 2K ints takes 3.64 seconds.

However, the Python program needs 94.975 seconds to process just 1.6K ints. Why is it so slow in Python?