Indeed
I've been using this optimization for years. For example, I optimized/simplified more_itertools.first from:
def first(iterable, default=_marker):
    try:
        return next(iter(iterable))
    except StopIteration as e:
        ...  # (empty case handling here)
to:
def first(iterable, default=_marker):
    for item in iterable:
        return item
    ...  # (empty case handling here)
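For context, the elided empty-case handling roughly amounts to the following; this is a minimal sketch of my version, and the exact code and error message in more_itertools differ:

    _marker = object()  # module-level sentinel meaning "no default was given"

    def first(iterable, default=_marker):
        for item in iterable:
            return item
        # empty case: return the default, or complain if none was given
        if default is _marker:
            raise ValueError("first() was called on an empty iterable with no default")
        return default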
In the issue I showed that it was faster for various iterables, especially for empty ones. Here are benchmark results with various arguments (columns: time of the old implementation in ns, time of the proposal in ns, time difference in ns, arguments):
old  new  diff  arguments
137  118   -19  (0,),
140  121   -19  [0],
150  122   -28  "0",
200  174   -26  {0},
142  123   -19  {0: 0},
115   92   -23  iter((0,) * 10000),
114   92   -22  iter([0] * 10000),
113   92   -21  repeat(0),
140  118   -22  (x for x in repeat(0)),
220  196   -24  Infinite(),
458  124  -334  (), None
457  126  -331  [], None
454  124  -330  "", None
463  132  -331  set(), None
455  125  -330  {}, None
425   98  -327  iter(()), None
426   98  -328  iter([]), None
422   98  -324  repeat(None, 0), None
429   98  -331  (x for x in ()), None
711  506  -205  Empty(), None
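For reproducibility, here is a minimal timeit sketch along these lines. It is not the exact harness from the issue: the simplified functions below just return the default instead of the real empty-case handling, iterator arguments are omitted since they would be consumed after the first call, and absolute numbers vary by machine and Python version.

    from timeit import timeit

    _marker = object()

    def first_old(iterable, default=_marker):
        try:
            return next(iter(iterable))
        except StopIteration:
            return default  # simplified empty-case handling

    def first_new(iterable, default=_marker):
        for item in iterable:
            return item
        return default  # simplified empty-case handling

    n = 10**6
    for args in [((0,),), ([0],), ("0",), ((), None), ([], None)]:
        old = timeit(lambda: first_old(*args), number=n) / n * 1e9
        new = timeit(lambda: first_new(*args), number=n) / n * 1e9
        print(f"{old:4.0f} {new:4.0f} {new - old:5.0f}  {args}")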
Why
It's not really about modern optimizations. With return next(iter(d)) you load two globals and make two calls, all in Python. With for key in d: return key you get lower-level equivalents of iter and next, which is evidently so much faster that it's a win despite the additional storing and loading of the local variable key. It has been like this for a long time; all of this was already the case in Python 2.
(In the above case of more_itertools.first, the for way also saves entering the try block, although CPython has had "zero-cost" exception handling for a while when no exception is raised.)
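You can see the difference with the dis module; the exact opcodes vary across CPython versions, but the shape is stable: two global loads plus two calls versus direct iteration opcodes.

    import dis

    def via_next(d):
        return next(iter(d))

    def via_for(d):
        for key in d:
            return key

    dis.dis(via_next)  # LOAD_GLOBAL next, LOAD_GLOBAL iter, then two calls
    dis.dis(via_for)   # GET_ITER and FOR_ITER, no global loads or Python-level calls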
More: The for-break(-else) pattern
I've used it not just with return but more often with an unconditional break, for example in "Optimize heapq.merge with for-break(-else) pattern?". Not just for speed but also for shorter/nicer code, for example in the initialization, where an iterator it (or its __next__) is added to the heap if it's not empty:
Current:
# inside heapq.merge's setup; h_append, order and direction come from the surrounding code
try:
    next = it.__next__
    h_append([next(), order * direction, next])
except StopIteration:
    pass
Proposal:
for value in it:
    h_append([value, order * direction, it])
    break
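And as the heading says, the pattern extends with else when the empty case actually needs handling: the else block runs only if the loop finished without break. A generic sketch, not from heapq.merge (where empty iterators are simply skipped):

    for value in it:
        result = value
        break
    else:
        result = default  # the loop never broke, i.e. the iterator was empty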
More: nested fors over the same iterator
Yet another way I've been using for to quickly get just the first value of an iterator is nested loops over the same iterator, for example my more_itertools.all_equal proposal (you can find benchmarks there), which got adopted with small modifications:
from itertools import groupby

def all_equal(iterable):
    groups = groupby(iterable)
    for first in groups:
        for second in groups:
            # a second group exists, so not all elements are equal
            return False
    return True
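A quick usage check (my examples, not from the proposal):

    assert all_equal("aaaa")
    assert not all_equal("aaab")
    assert all_equal([1])
    assert all_equal([])  # vacuously true for an empty iterable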
In that case, the inner loop had an unconditional return. In other cases, my inner loop exhausts the iterator, for example in my improvement of more_itertools.mark_ends just last week:
def mark_ends__improved_mystyle(iterable):
    # yields (is_first, is_last, item) triples
    it = iter(iterable)
    for a in it:
        first = True
        for b in it:
            yield first, False, a
            a = b
            first = False
        yield first, True, a
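For example (my illustration, not from the original post):

    print(list(mark_ends__improved_mystyle("abc")))
    # [(True, False, 'a'), (False, False, 'b'), (False, True, 'c')]
    print(list(mark_ends__improved_mystyle("a")))
    # [(True, True, 'a')]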
The previous/alternative code achieved the "get first value or do nothing" with four lines instead of my single line for a in it: and its nesting:
try:
    b = next(it)
except StopIteration:
    return
Btw
Your func1 can be optimized by changing list(d)[0] to [*d][0], which is faster since it likewise avoids loading and calling the global list.
And if you really assume that your dict has exactly one item, as you said, then you can improve your func2 by changing rv, *_ = d to rv, = d.
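A minimal side-by-side sketch of these variants; func1 and func2 refer to your question's functions, and I'm assuming a dict d with exactly one item:

    d = {"key": "value"}

    rv = list(d)[0]  # func1: loads and calls the global list
    rv = [*d][0]     # faster: a list display, no global lookup or call

    rv, *_ = d       # func2: builds an extra list for the starred target
    rv, = d          # faster when d has exactly one key: plain unpacking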