Interesting performance question

I have just noticed an oddity:

Using Python 3.6, building a tuple like this:

my_tuple = tuple([x*x for x in range(1,1000)])

is about 1/3 quicker than

my_tuple = tuple(x*x for x in range(1,1000))

Measurements:

    $ python3 -m timeit 'my_tuple = tuple([x*x for x in range(1,1000)])'
    10000 loops, best of 3: 33.5 usec per loop

    $ python3 -m timeit 'my_tuple = tuple(x*x for x in range(1,1000))'
    10000 loops, best of 3: 44.1 usec per loop
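For anyone who wants to reproduce this from a script rather than the shell, here is a minimal sketch using the timeit module directly (the absolute numbers will of course vary by machine and Python version):

```python
import timeit

# Time the two constructions; each statement is run 10,000 times,
# matching the command-line measurements above.
from_list = timeit.timeit(
    'tuple([x*x for x in range(1, 1000)])', number=10_000)
from_gen = timeit.timeit(
    'tuple(x*x for x in range(1, 1000))', number=10_000)

print(f'via list comprehension:   {from_list:.3f}s total')
print(f'via generator expression: {from_gen:.3f}s total')
```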

My first assumption was that in the first example (with the list 
comprehension) the tuple constructor is able to allocate the right 
amount of memory immediately, and therefore avoids the overhead of 
resizing every so often as new items are fetched from the generator.

One question though - wouldn't the list comprehension have a similar 
resizing issue, since it can't predict the final size of the list 
ahead of time?
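One way to see that the list comprehension does indeed over-allocate is to compare sizes with sys.getsizeof - a small sketch (exact byte counts are CPython implementation details and vary by version):

```python
import sys

# A list grows its backing array incrementally, so it over-allocates
# spare capacity; sys.getsizeof reveals that slack.
items = [x*x for x in range(1, 1000)]
list_size = sys.getsizeof(items)

# A tuple built from an object whose length is known up front is
# allocated at exactly the needed size.
tuple_size = sys.getsizeof(tuple(items))

print(f'list:  {list_size} bytes')
print(f'tuple: {tuple_size} bytes')
```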

Is there a good explanation of the performance difference? I don't 
think that the memory resizing overhead is the whole answer.
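For what it's worth, my rough mental model of the generator path is something like the sketch below - this is *not* the actual CPython implementation (which builds the tuple directly and resizes it in C), just an illustration of the per-item work involved when the length isn't known in advance:

```python
def tuple_from_iterable(it):
    """Rough model of tuple(<generator>): the length is unknown, so
    items must be collected one at a time before the tuple exists."""
    buf = []
    for item in it:        # one generator frame resume per item
        buf.append(item)   # amortised resizes, as in a list comprehension
    return tuple(buf)      # final exact-size copy

print(tuple_from_iterable(x*x for x in range(1, 5)))
```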


Tony Flury